00:00:00.001 Started by upstream project "autotest-nightly" build number 4350
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3713
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.090 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.092 The recommended git tool is: git
00:00:00.092 using credential 00000000-0000-0000-0000-000000000002
00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.106 Fetching changes from the remote Git repository
00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.123 Using shallow fetch with depth 1
00:00:00.123 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.123 > git --version # timeout=10
00:00:00.147 > git --version # 'git version 2.39.2'
00:00:00.147 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.170 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.170 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.102 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.117 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.128 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.129 > git config core.sparsecheckout # timeout=10
00:00:07.139 > git read-tree -mu HEAD # timeout=10
00:00:07.154 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.241 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.241 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.340 [Pipeline] Start of Pipeline
00:00:07.352 [Pipeline] library
00:00:07.354 Loading library shm_lib@master
00:00:07.354 Library shm_lib@master is cached. Copying from home.
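The checkout above is the usual Jenkins pattern of a shallow fetch pinned to a single ref, followed by a forced checkout of FETCH_HEAD. A minimal stand-alone sketch of the same sequence (repository URL and ref taken from the log; the scratch path is an assumption):

  #!/usr/bin/env bash
  # Hedged sketch of the shallow, pinned checkout Jenkins performs above.
  set -euo pipefail
  REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  REF=refs/heads/master
  WS=/tmp/jbp-checkout   # assumed scratch path, not the real Jenkins workspace
  mkdir -p "$WS" && cd "$WS"
  git init -q .
  git config remote.origin.url "$REPO"
  # --depth=1 keeps only the tip commit, which is all a throwaway build needs
  git fetch --tags --force --progress --depth=1 -- "$REPO" "$REF"
  git checkout -f FETCH_HEAD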
00:00:07.370 [Pipeline] node
00:00:07.381 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.383 [Pipeline] {
00:00:07.389 [Pipeline] catchError
00:00:07.390 [Pipeline] {
00:00:07.399 [Pipeline] wrap
00:00:07.405 [Pipeline] {
00:00:07.412 [Pipeline] stage
00:00:07.413 [Pipeline] { (Prologue)
00:00:07.616 [Pipeline] sh
00:00:07.898 + logger -p user.info -t JENKINS-CI
00:00:07.914 [Pipeline] echo
00:00:07.915 Node: WFP4
00:00:07.922 [Pipeline] sh
00:00:08.217 [Pipeline] setCustomBuildProperty
00:00:08.229 [Pipeline] echo
00:00:08.230 Cleanup processes
00:00:08.236 [Pipeline] sh
00:00:08.518 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.518 3360284 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.527 [Pipeline] sh
00:00:08.805 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.805 ++ grep -v 'sudo pgrep'
00:00:08.805 ++ awk '{print $1}'
00:00:08.805 + sudo kill -9
00:00:08.805 + true
00:00:08.820 [Pipeline] cleanWs
00:00:08.829 [WS-CLEANUP] Deleting project workspace...
00:00:08.829 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.836 [WS-CLEANUP] done
00:00:08.840 [Pipeline] setCustomBuildProperty
00:00:08.856 [Pipeline] sh
00:00:09.137 + sudo git config --global --replace-all safe.directory '*'
00:00:09.227 [Pipeline] httpRequest
00:00:10.396 [Pipeline] echo
00:00:10.397 Sorcerer 10.211.164.112 is alive
00:00:10.406 [Pipeline] retry
00:00:10.407 [Pipeline] {
00:00:10.422 [Pipeline] httpRequest
00:00:10.425 HttpMethod: GET
00:00:10.426 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.427 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.429 Response Code: HTTP/1.1 200 OK
00:00:10.429 Success: Status code 200 is in the accepted range: 200,404
00:00:10.430 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.460 [Pipeline] }
00:00:11.471 [Pipeline] // retry
00:00:11.477 [Pipeline] sh
00:00:11.754 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.769 [Pipeline] httpRequest
00:00:12.124 [Pipeline] echo
00:00:12.126 Sorcerer 10.211.164.112 is alive
00:00:12.135 [Pipeline] retry
00:00:12.136 [Pipeline] {
00:00:12.151 [Pipeline] httpRequest
00:00:12.155 HttpMethod: GET
00:00:12.155 URL: http://10.211.164.112/packages/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz
00:00:12.156 Sending request to url: http://10.211.164.112/packages/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz
00:00:12.174 Response Code: HTTP/1.1 200 OK
00:00:12.174 Success: Status code 200 is in the accepted range: 200,404
00:00:12.175 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz
00:01:39.011 [Pipeline] }
00:01:39.028 [Pipeline] // retry
00:01:39.036 [Pipeline] sh
00:01:39.325 + tar --no-same-owner -xf spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz
00:01:41.877 [Pipeline] sh
00:01:42.169 + git -C spdk log --oneline -n5
00:01:42.169 52a413487 bdev: do not retry nomem I/Os during aborting them
00:01:42.169 d13942918 bdev: simplify bdev_reset_freeze_channel
00:01:42.169 0edc184ec accel/mlx5: Support mkey registration
00:01:42.169 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts
00:01:42.169 1ae735a5d nvme: add poll_group interrupt callback
00:01:42.179 [Pipeline] }
00:01:42.193 [Pipeline] // stage
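The "Cleanup processes" step above uses a common pgrep pipeline to kill any SPDK processes left over from a previous run. A hedged stand-alone version of that idiom (target path copied from the log; the variable names are illustrative):

  #!/usr/bin/env bash
  # Kill leftover processes whose command line matches a workspace path.
  TARGET=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # pgrep -af prints "PID CMDLINE" for full-command-line matches; grep -v drops
  # the pgrep invocation itself, and awk keeps only the PIDs.
  pids=$(pgrep -af "$TARGET" | grep -v 'pgrep' | awk '{print $1}')
  # In the log pgrep matched nothing but itself, so kill ran with no arguments
  # and failed; the trailing "|| true" (the log's "+ true") keeps the step green.
  sudo kill -9 $pids 2>/dev/null || true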
00:01:42.201 [Pipeline] stage
00:01:42.203 [Pipeline] { (Prepare)
00:01:42.218 [Pipeline] writeFile
00:01:42.233 [Pipeline] sh
00:01:42.518 + logger -p user.info -t JENKINS-CI
00:01:42.530 [Pipeline] sh
00:01:42.815 + logger -p user.info -t JENKINS-CI
00:01:42.827 [Pipeline] sh
00:01:43.113 + cat autorun-spdk.conf
00:01:43.113 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.113 SPDK_TEST_NVMF=1
00:01:43.113 SPDK_TEST_NVME_CLI=1
00:01:43.113 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.113 SPDK_TEST_NVMF_NICS=e810
00:01:43.113 SPDK_RUN_ASAN=1
00:01:43.113 SPDK_RUN_UBSAN=1
00:01:43.113 NET_TYPE=phy
00:01:43.121 RUN_NIGHTLY=1
00:01:43.125 [Pipeline] readFile
00:01:43.150 [Pipeline] withEnv
00:01:43.152 [Pipeline] {
00:01:43.165 [Pipeline] sh
00:01:43.452 + set -ex
00:01:43.452 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:43.452 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:43.452 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.452 ++ SPDK_TEST_NVMF=1
00:01:43.452 ++ SPDK_TEST_NVME_CLI=1
00:01:43.452 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.452 ++ SPDK_TEST_NVMF_NICS=e810
00:01:43.452 ++ SPDK_RUN_ASAN=1
00:01:43.452 ++ SPDK_RUN_UBSAN=1
00:01:43.452 ++ NET_TYPE=phy
00:01:43.452 ++ RUN_NIGHTLY=1
00:01:43.452 + case $SPDK_TEST_NVMF_NICS in
00:01:43.452 + DRIVERS=ice
00:01:43.452 + [[ tcp == \r\d\m\a ]]
00:01:43.452 + [[ -n ice ]]
00:01:43.452 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:43.452 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:43.452 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:43.452 rmmod: ERROR: Module i40iw is not currently loaded
00:01:43.452 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:43.452 + true
00:01:43.452 + for D in $DRIVERS
00:01:43.452 + sudo modprobe ice
00:01:43.452 + exit 0
00:01:43.462 [Pipeline] }
00:01:43.476 [Pipeline] // withEnv
00:01:43.481 [Pipeline] }
00:01:43.494 [Pipeline] // stage
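The Prepare stage above maps the NIC type from autorun-spdk.conf to a kernel driver (e810 uses ice), unloads competing RDMA modules, and loads the one it needs. A minimal hedged sketch of that logic, with the module list copied from the log and the case structure an assumption:

  #!/usr/bin/env bash
  # Sketch of the NIC-driver preparation seen in the Prepare stage.
  set -ex
  CONF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
  if [[ -f "$CONF" ]]; then source "$CONF"; fi
  case "$SPDK_TEST_NVMF_NICS" in
      e810) DRIVERS=ice ;;   # Intel E810 NICs are driven by ice
      *)    DRIVERS= ;;      # other NIC types omitted in this sketch
  esac
  if [[ -n "$DRIVERS" ]]; then
      # Unload RDMA providers that could claim the devices; modules that are
      # not loaded just produce harmless rmmod errors, hence "|| true".
      sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
      for D in $DRIVERS; do
          sudo modprobe "$D"
      done
  fi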
00:01:43.503 [Pipeline] catchError
00:01:43.505 [Pipeline] {
00:01:43.519 [Pipeline] timeout
00:01:43.519 Timeout set to expire in 1 hr 0 min
00:01:43.521 [Pipeline] {
00:01:43.535 [Pipeline] stage
00:01:43.537 [Pipeline] { (Tests)
00:01:43.550 [Pipeline] sh
00:01:43.837 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:43.837 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:43.837 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:43.837 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:43.837 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:43.837 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:43.837 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:43.837 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:43.837 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:43.837 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:43.837 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:43.837 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:43.837 + source /etc/os-release
00:01:43.837 ++ NAME='Fedora Linux'
00:01:43.837 ++ VERSION='39 (Cloud Edition)'
00:01:43.837 ++ ID=fedora
00:01:43.837 ++ VERSION_ID=39
00:01:43.837 ++ VERSION_CODENAME=
00:01:43.837 ++ PLATFORM_ID=platform:f39
00:01:43.837 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:43.837 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:43.837 ++ LOGO=fedora-logo-icon
00:01:43.837 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:43.837 ++ HOME_URL=https://fedoraproject.org/
00:01:43.837 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:43.837 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:43.837 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:43.837 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:43.837 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:43.837 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:43.837 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:43.837 ++ SUPPORT_END=2024-11-12
00:01:43.837 ++ VARIANT='Cloud Edition'
00:01:43.837 ++ VARIANT_ID=cloud
00:01:43.837 + uname -a
00:01:43.837 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:43.837 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:46.379 Hugepages
00:01:46.379 node hugesize free / total
00:01:46.379 node0 1048576kB 0 / 0
00:01:46.379 node0 2048kB 0 / 0
00:01:46.379 node1 1048576kB 0 / 0
00:01:46.379 node1 2048kB 0 / 0
00:01:46.379
00:01:46.379 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:46.379 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:46.379 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:46.379 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:46.379 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:46.379 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:46.379 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:46.379 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:46.379 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:46.379 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:46.379 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:46.379 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:46.379 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:46.379 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:46.379 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:46.379 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:46.379 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:46.379 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:46.379 + rm -f /tmp/spdk-ld-path
00:01:46.379 + source autorun-spdk.conf
00:01:46.379 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.379 ++ SPDK_TEST_NVMF=1
00:01:46.379 ++ SPDK_TEST_NVME_CLI=1
00:01:46.379 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:46.379 ++ SPDK_TEST_NVMF_NICS=e810
00:01:46.379 ++ SPDK_RUN_ASAN=1
00:01:46.379 ++ SPDK_RUN_UBSAN=1
00:01:46.379 ++ NET_TYPE=phy
00:01:46.379 ++ RUN_NIGHTLY=1
00:01:46.379 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:46.379 + [[ -n '' ]]
00:01:46.379 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
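The setup.sh status output above is essentially a formatted dump of the kernel's per-NUMA-node hugepage counters. A hedged sketch of reading the same "node hugesize free / total" numbers straight from sysfs (standard kernel paths, no SPDK scripts required):

  #!/usr/bin/env bash
  # Print per-NUMA-node hugepage usage, similar to the table above.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          size=${hp##*hugepages-}            # e.g. 2048kB or 1048576kB
          free=$(cat "$hp/free_hugepages")
          total=$(cat "$hp/nr_hugepages")
          printf '%s %s %s / %s\n' "$(basename "$node")" "$size" "$free" "$total"
      done
  done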
00:01:46.379 + for M in /var/spdk/build-*-manifest.txt
00:01:46.379 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:46.379 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:46.379 + for M in /var/spdk/build-*-manifest.txt
00:01:46.379 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:46.379 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:46.379 + for M in /var/spdk/build-*-manifest.txt
00:01:46.379 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:46.379 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:46.379 ++ uname
00:01:46.379 + [[ Linux == \L\i\n\u\x ]]
00:01:46.379 + sudo dmesg -T
00:01:46.379 + sudo dmesg --clear
00:01:46.379 + dmesg_pid=3361210
00:01:46.379 + [[ Fedora Linux == FreeBSD ]]
00:01:46.379 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:46.379 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:46.379 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:46.379 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:46.379 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:46.379 + [[ -x /usr/src/fio-static/fio ]]
00:01:46.379 + export FIO_BIN=/usr/src/fio-static/fio
00:01:46.379 + sudo dmesg -Tw
00:01:46.379 + FIO_BIN=/usr/src/fio-static/fio
00:01:46.379 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:46.379 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:46.379 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:46.379 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:46.379 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:46.379 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:46.379 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:46.379 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:46.379 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
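The build-manifest copy earlier in this step is a guarded glob loop: iterate the matching files and cp each one that actually exists. A small hedged generalization of that pattern:

  #!/usr/bin/env bash
  # Copy whichever build manifests exist into the job's output directory,
  # mirroring the "for M in /var/spdk/build-*-manifest.txt" loop in the log.
  OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
  for M in /var/spdk/build-*-manifest.txt; do
      # With no matches the unexpanded pattern comes through, so test -f first
      # (the trace shows the same [[ -f ... ]] guard before every cp).
      [[ -f "$M" ]] && cp "$M" "$OUT/"
  done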
12:04:53 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
12:04:53 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
12:04:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
12:04:53 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
12:04:53 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:04:53 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
12:04:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
12:04:53 -- scripts/common.sh@15 -- $ shopt -s extglob
12:04:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
12:04:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:04:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:04:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:04:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:04:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:04:53 -- paths/export.sh@5 -- $ export PATH
12:04:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:04:53 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
12:04:53 -- common/autobuild_common.sh@493 -- $ date +%s
12:04:53 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733828693.XXXXXX
12:04:53 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733828693.A9jE7V
12:04:53 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
12:04:53 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
12:04:53 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
12:04:53 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
12:04:53 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
12:04:53 -- common/autobuild_common.sh@509 -- $ get_config_params
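The mktemp -dt call in the trace above gives each build a unique scratch workspace keyed to the current epoch. A hedged sketch of that pattern (the cleanup at the end is an assumption; the log leaves teardown to later stages):

  #!/usr/bin/env bash
  # Create a unique scratch workspace named after the current epoch, as the
  # autobuild trace does with mktemp -dt spdk_<epoch>.XXXXXX.
  stamp=$(date +%s)
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")
  echo "workspace: $SPDK_WORKSPACE"
  # ... build steps would run here ...
  rm -rf "$SPDK_WORKSPACE"   # assumed cleanup, not shown in the log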
12:04:53 -- common/autotest_common.sh@409 -- $ xtrace_disable
12:04:53 -- common/autotest_common.sh@10 -- $ set +x
12:04:53 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
12:04:53 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
12:04:53 -- pm/common@17 -- $ local monitor
12:04:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:04:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:04:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:04:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:04:53 -- pm/common@25 -- $ sleep 1
12:04:53 -- pm/common@21 -- $ date +%s
12:04:53 -- pm/common@21 -- $ date +%s
12:04:53 -- pm/common@21 -- $ date +%s
12:04:53 -- pm/common@21 -- $ date +%s
12:04:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733828693
12:04:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733828693
12:04:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733828693
12:04:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733828693
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733828693_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733828693_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733828693_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733828693_collect-bmc-pm.bmc.pm.log
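The monitor start-up above fans several collectors out against the same output directory and log suffix. A hedged sketch of that fan-out (collector paths and the -d/-l/-p flags are copied from the log; the loop structure is an assumption, and the real start_monitor_resources lives in SPDK's pm/common):

  #!/usr/bin/env bash
  # Launch the resource collectors in the background with a shared log suffix.
  PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
  POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  SUFFIX=monitor.autobuild.sh.$(date +%s)
  mkdir -p "$POWER_DIR"
  for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
      # -d: output dir, -l: log to file, -p: name prefix for this run
      "$PM/$collector" -d "$POWER_DIR" -l -p "$SUFFIX" &
  done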
12:04:54 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
12:04:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:04:54 -- spdk/autobuild.sh@12 -- $ umask 022
12:04:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
12:04:54 -- spdk/autobuild.sh@16 -- $ date -u
00:01:47.611 Tue Dec 10 11:04:54 AM UTC 2024
12:04:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:47.611 v25.01-pre-324-g52a413487
12:04:54 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
12:04:54 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
12:04:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:04:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:04:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:47.611 ************************************
00:01:47.611 START TEST asan
00:01:47.611 ************************************
12:04:54 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:47.611 using asan
00:01:47.611
00:01:47.611 real 0m0.000s
00:01:47.611 user 0m0.000s
00:01:47.611 sys 0m0.000s
12:04:54 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
12:04:54 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:47.611 ************************************
00:01:47.611 END TEST asan
00:01:47.611 ************************************
12:04:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
12:04:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
12:04:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:04:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:04:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:47.612 ************************************
00:01:47.612 START TEST ubsan
00:01:47.612 ************************************
12:04:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:47.612 using ubsan
00:01:47.612
00:01:47.612 real 0m0.000s
00:01:47.612 user 0m0.000s
00:01:47.612 sys 0m0.000s
12:04:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
12:04:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:47.612 ************************************
00:01:47.612 END TEST ubsan
00:01:47.612 ************************************
12:04:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
12:04:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
12:04:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
12:04:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
12:04:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
12:04:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
12:04:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
12:04:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
12:04:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:47.871 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:47.871 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:48.131 Using 'verbs' RDMA provider
00:02:01.299 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:11.290 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:11.549 Creating mk/config.mk...done.
00:02:11.549 Creating mk/cc.flags.mk...done.
00:02:11.549 Type 'make' to build.
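The START TEST / END TEST banners above come from SPDK's run_test helper in test/common/autotest_common.sh, which wraps a command in banners and timing. This is only a hedged sketch of the pattern, not SPDK's actual implementation:

  #!/usr/bin/env bash
  # Minimal run_test-style wrapper: banner, run the command, banner again.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      local t0=$SECONDS
      "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name (rc=$rc, $(( SECONDS - t0 ))s)"
      echo "************************************"
      return $rc
  }
  run_test asan echo 'using asan'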
12:05:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
12:05:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:05:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:05:18 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.550 ************************************
00:02:11.550 START TEST make
00:02:11.550 ************************************
12:05:18 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:12.119 make[1]: Nothing to be done for 'all'.
00:02:20.251 The Meson build system
00:02:20.251 Version: 1.5.0
00:02:20.251 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:20.251 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:20.251 Build type: native build
00:02:20.251 Program cat found: YES (/usr/bin/cat)
00:02:20.251 Project name: DPDK
00:02:20.251 Project version: 24.03.0
00:02:20.251 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:20.251 C linker for the host machine: cc ld.bfd 2.40-14
00:02:20.251 Host machine cpu family: x86_64
00:02:20.251 Host machine cpu: x86_64
00:02:20.251 Message: ## Building in Developer Mode ##
00:02:20.251 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:20.251 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:20.251 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:20.251 Program python3 found: YES (/usr/bin/python3)
00:02:20.251 Program cat found: YES (/usr/bin/cat)
00:02:20.251 Compiler for C supports arguments -march=native: YES
00:02:20.251 Checking for size of "void *" : 8
00:02:20.251 Checking for size of "void *" : 8 (cached)
00:02:20.252 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:20.252 Library m found: YES
00:02:20.252 Library numa found: YES
00:02:20.252 Has header "numaif.h" : YES
00:02:20.252 Library fdt found: NO
00:02:20.252 Library execinfo found: NO
00:02:20.252 Has header "execinfo.h" : YES
00:02:20.252 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:20.252 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:20.252 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:20.252 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:20.252 Run-time dependency openssl found: YES 3.1.1
00:02:20.252 Run-time dependency libpcap found: YES 1.10.4
00:02:20.252 Has header "pcap.h" with dependency libpcap: YES
00:02:20.252 Compiler for C supports arguments -Wcast-qual: YES
00:02:20.252 Compiler for C supports arguments -Wdeprecated: YES
00:02:20.252 Compiler for C supports arguments -Wformat: YES
00:02:20.252 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:20.252 Compiler for C supports arguments -Wformat-security: NO
00:02:20.252 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:20.252 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:20.252 Compiler for C supports arguments -Wnested-externs: YES
00:02:20.252 Compiler for C supports arguments -Wold-style-definition: YES
00:02:20.252 Compiler for C supports arguments -Wpointer-arith: YES
00:02:20.252 Compiler for C supports arguments -Wsign-compare: YES
00:02:20.252 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:20.252 Compiler for C supports arguments -Wundef: YES
00:02:20.252 Compiler for C supports arguments -Wwrite-strings: YES
00:02:20.252 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:20.252 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:20.252 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:20.252 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:20.252 Program objdump found: YES (/usr/bin/objdump)
00:02:20.252 Compiler for C supports arguments -mavx512f: YES
00:02:20.252 Checking if "AVX512 checking" compiles: YES
00:02:20.252 Fetching value of define "__SSE4_2__" : 1
00:02:20.252 Fetching value of define "__AES__" : 1
00:02:20.252 Fetching value of define "__AVX__" : 1
00:02:20.252 Fetching value of define "__AVX2__" : 1
00:02:20.252 Fetching value of define "__AVX512BW__" : 1
00:02:20.252 Fetching value of define "__AVX512CD__" : 1
00:02:20.252 Fetching value of define "__AVX512DQ__" : 1
00:02:20.252 Fetching value of define "__AVX512F__" : 1
00:02:20.252 Fetching value of define "__AVX512VL__" : 1
00:02:20.252 Fetching value of define "__PCLMUL__" : 1
00:02:20.252 Fetching value of define "__RDRND__" : 1
00:02:20.252 Fetching value of define "__RDSEED__" : 1
00:02:20.252 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:20.252 Fetching value of define "__znver1__" : (undefined)
00:02:20.252 Fetching value of define "__znver2__" : (undefined)
00:02:20.252 Fetching value of define "__znver3__" : (undefined)
00:02:20.252 Fetching value of define "__znver4__" : (undefined)
00:02:20.252 Library asan found: YES
00:02:20.252 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:20.252 Message: lib/log: Defining dependency "log"
00:02:20.252 Message: lib/kvargs: Defining dependency "kvargs"
00:02:20.252 Message: lib/telemetry: Defining dependency "telemetry"
00:02:20.252 Library rt found: YES
00:02:20.252 Checking for function "getentropy" : NO
00:02:20.252 Message: lib/eal: Defining dependency "eal"
00:02:20.252 Message: lib/ring: Defining dependency "ring"
00:02:20.252 Message: lib/rcu: Defining dependency "rcu"
00:02:20.252 Message: lib/mempool: Defining dependency "mempool"
00:02:20.252 Message: lib/mbuf: Defining dependency "mbuf"
00:02:20.252 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:20.252 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:20.252 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:20.252 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:20.252 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:20.252 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:20.252 Compiler for C supports arguments -mpclmul: YES
00:02:20.252 Compiler for C supports arguments -maes: YES
00:02:20.252 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:20.252 Compiler for C supports arguments -mavx512bw: YES
00:02:20.252 Compiler for C supports arguments -mavx512dq: YES
00:02:20.252 Compiler for C supports arguments -mavx512vl: YES
00:02:20.252 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:20.252 Compiler for C supports arguments -mavx2: YES
00:02:20.252 Compiler for C supports arguments -mavx: YES
00:02:20.252 Message: lib/net: Defining dependency "net"
00:02:20.252 Message: lib/meter: Defining dependency "meter"
00:02:20.252 Message: lib/ethdev: Defining dependency "ethdev"
00:02:20.252 Message: lib/pci: Defining dependency "pci"
00:02:20.252 Message: lib/cmdline: Defining dependency "cmdline"
00:02:20.252 Message: lib/hash: Defining dependency "hash"
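The "Compiler for C supports arguments" probes above are meson compiling an empty program with each candidate flag and checking the exit status. A hedged rough equivalent of that probe in plain shell (cc and the flag names are assumptions taken from the log):

  #!/usr/bin/env bash
  # Rough equivalent of meson's compiler-flag probes: try to compile an empty
  # program with the flag and report YES/NO based on the exit status.
  check_cflag() {
      if echo 'int main(void){return 0;}' | cc "$1" -x c - -o /dev/null 2>/dev/null; then
          echo "Compiler for C supports arguments $1: YES"
      else
          echo "Compiler for C supports arguments $1: NO"
      fi
  }
  check_cflag -mavx512f
  check_cflag -Wno-zero-length-bounds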
00:02:20.252 Message: lib/timer: Defining dependency "timer"
00:02:20.252 Message: lib/compressdev: Defining dependency "compressdev"
00:02:20.252 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:20.252 Message: lib/dmadev: Defining dependency "dmadev"
00:02:20.252 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:20.252 Message: lib/power: Defining dependency "power"
00:02:20.252 Message: lib/reorder: Defining dependency "reorder"
00:02:20.252 Message: lib/security: Defining dependency "security"
00:02:20.252 Has header "linux/userfaultfd.h" : YES
00:02:20.252 Has header "linux/vduse.h" : YES
00:02:20.252 Message: lib/vhost: Defining dependency "vhost"
00:02:20.252 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:20.252 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:20.252 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:20.252 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:20.252 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:20.252 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:20.252 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:20.252 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:20.252 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:20.252 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:20.252 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:20.252 Configuring doxy-api-html.conf using configuration
00:02:20.252 Configuring doxy-api-man.conf using configuration
00:02:20.252 Program mandb found: YES (/usr/bin/mandb)
00:02:20.252 Program sphinx-build found: NO
00:02:20.252 Configuring rte_build_config.h using configuration
00:02:20.252 Message:
00:02:20.252 =================
00:02:20.252 Applications Enabled
00:02:20.252 =================
00:02:20.252
00:02:20.252 apps:
00:02:20.252
00:02:20.252
00:02:20.252 Message:
00:02:20.252 =================
00:02:20.252 Libraries Enabled
00:02:20.252 =================
00:02:20.252
00:02:20.252 libs:
00:02:20.252 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:20.252 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:20.252 cryptodev, dmadev, power, reorder, security, vhost,
00:02:20.252
00:02:20.252 Message:
00:02:20.252 ===============
00:02:20.252 Drivers Enabled
00:02:20.252 ===============
00:02:20.252
00:02:20.252 common:
00:02:20.252
00:02:20.252 bus:
00:02:20.252 pci, vdev,
00:02:20.252 mempool:
00:02:20.252 ring,
00:02:20.252 dma:
00:02:20.252
00:02:20.252 net:
00:02:20.252
00:02:20.252 crypto:
00:02:20.252
00:02:20.252 compress:
00:02:20.252
00:02:20.252 vdpa:
00:02:20.252
00:02:20.252
00:02:20.252 Message:
00:02:20.252 =================
00:02:20.252 Content Skipped
00:02:20.252 =================
00:02:20.252
00:02:20.252 apps:
00:02:20.252 dumpcap: explicitly disabled via build config
00:02:20.252 graph: explicitly disabled via build config
00:02:20.252 pdump: explicitly disabled via build config
00:02:20.252 proc-info: explicitly disabled via build config
00:02:20.252 test-acl: explicitly disabled via build config
00:02:20.252 test-bbdev: explicitly disabled via build config
00:02:20.252 test-cmdline: explicitly disabled via build config
00:02:20.252 test-compress-perf: explicitly disabled via build config
00:02:20.252 test-crypto-perf: explicitly disabled via build config
00:02:20.252 test-dma-perf: explicitly disabled via build config
00:02:20.252 test-eventdev: explicitly disabled via build config
00:02:20.252 test-fib: explicitly disabled via build config
00:02:20.252 test-flow-perf: explicitly disabled via build config
00:02:20.252 test-gpudev: explicitly disabled via build config
00:02:20.252 test-mldev: explicitly disabled via build config
00:02:20.252 test-pipeline: explicitly disabled via build config
00:02:20.252 test-pmd: explicitly disabled via build config
00:02:20.252 test-regex: explicitly disabled via build config
00:02:20.252 test-sad: explicitly disabled via build config
00:02:20.252 test-security-perf: explicitly disabled via build config
00:02:20.252
00:02:20.252 libs:
00:02:20.252 argparse: explicitly disabled via build config
00:02:20.252 metrics: explicitly disabled via build config
00:02:20.252 acl: explicitly disabled via build config
00:02:20.252 bbdev: explicitly disabled via build config
00:02:20.252 bitratestats: explicitly disabled via build config
00:02:20.252 bpf: explicitly disabled via build config
00:02:20.252 cfgfile: explicitly disabled via build config
00:02:20.252 distributor: explicitly disabled via build config
00:02:20.252 efd: explicitly disabled via build config
00:02:20.252 eventdev: explicitly disabled via build config
00:02:20.252 dispatcher: explicitly disabled via build config
00:02:20.252 gpudev: explicitly disabled via build config
00:02:20.252 gro: explicitly disabled via build config
00:02:20.252 gso: explicitly disabled via build config
00:02:20.252 ip_frag: explicitly disabled via build config
00:02:20.252 jobstats: explicitly disabled via build config
00:02:20.252 latencystats: explicitly disabled via build config
00:02:20.252 lpm: explicitly disabled via build config
00:02:20.252 member: explicitly disabled via build config
00:02:20.252 pcapng: explicitly disabled via build config
00:02:20.252 rawdev: explicitly disabled via build config
00:02:20.252 regexdev: explicitly disabled via build config
00:02:20.252 mldev: explicitly disabled via build config
00:02:20.252 rib: explicitly disabled via build config
00:02:20.252 sched: explicitly disabled via build config
00:02:20.252 stack: explicitly disabled via build config
00:02:20.253 ipsec: explicitly disabled via build config
00:02:20.253 pdcp: explicitly disabled via build config
00:02:20.253 fib: explicitly disabled via build config
00:02:20.253 port: explicitly disabled via build config
00:02:20.253 pdump: explicitly disabled via build config
00:02:20.253 table: explicitly disabled via build config
00:02:20.253 pipeline: explicitly disabled via build config
00:02:20.253 graph: explicitly disabled via build config
00:02:20.253 node: explicitly disabled via build config
00:02:20.253
00:02:20.253 drivers:
00:02:20.253 common/cpt: not in enabled drivers build config
00:02:20.253 common/dpaax: not in enabled drivers build config
00:02:20.253 common/iavf: not in enabled drivers build config
00:02:20.253 common/idpf: not in enabled drivers build config
00:02:20.253 common/ionic: not in enabled drivers build config
00:02:20.253 common/mvep: not in enabled drivers build config
00:02:20.253 common/octeontx: not in enabled drivers build config
00:02:20.253 bus/auxiliary: not in enabled drivers build config
00:02:20.253 bus/cdx: not in enabled drivers build config
00:02:20.253 bus/dpaa: not in enabled drivers build config
00:02:20.253 bus/fslmc: not in enabled drivers build config
00:02:20.253 bus/ifpga: not in enabled drivers build config
00:02:20.253 bus/platform: not in enabled drivers build config
00:02:20.253 bus/uacce: not in enabled drivers build config
00:02:20.253 bus/vmbus: not in enabled drivers build config
00:02:20.253 common/cnxk: not in enabled drivers build config
00:02:20.253 common/mlx5: not in enabled drivers build config
00:02:20.253 common/nfp: not in enabled drivers build config
00:02:20.253 common/nitrox: not in enabled drivers build config
00:02:20.253 common/qat: not in enabled drivers build config
00:02:20.253 common/sfc_efx: not in enabled drivers build config
00:02:20.253 mempool/bucket: not in enabled drivers build config
00:02:20.253 mempool/cnxk: not in enabled drivers build config
00:02:20.253 mempool/dpaa: not in enabled drivers build config
00:02:20.253 mempool/dpaa2: not in enabled drivers build config
00:02:20.253 mempool/octeontx: not in enabled drivers build config
00:02:20.253 mempool/stack: not in enabled drivers build config
00:02:20.253 dma/cnxk: not in enabled drivers build config
00:02:20.253 dma/dpaa: not in enabled drivers build config
00:02:20.253 dma/dpaa2: not in enabled drivers build config
00:02:20.253 dma/hisilicon: not in enabled drivers build config
00:02:20.253 dma/idxd: not in enabled drivers build config
00:02:20.253 dma/ioat: not in enabled drivers build config
00:02:20.253 dma/skeleton: not in enabled drivers build config
00:02:20.253 net/af_packet: not in enabled drivers build config
00:02:20.253 net/af_xdp: not in enabled drivers build config
00:02:20.253 net/ark: not in enabled drivers build config
00:02:20.253 net/atlantic: not in enabled drivers build config
00:02:20.253 net/avp: not in enabled drivers build config
00:02:20.253 net/axgbe: not in enabled drivers build config
00:02:20.253 net/bnx2x: not in enabled drivers build config
00:02:20.253 net/bnxt: not in enabled drivers build config
00:02:20.253 net/bonding: not in enabled drivers build config
00:02:20.253 net/cnxk: not in enabled drivers build config
00:02:20.253 net/cpfl: not in enabled drivers build config
00:02:20.253 net/cxgbe: not in enabled drivers build config
00:02:20.253 net/dpaa: not in enabled drivers build config
00:02:20.253 net/dpaa2: not in enabled drivers build config
00:02:20.253 net/e1000: not in enabled drivers build config
00:02:20.253 net/ena: not in enabled drivers build config
00:02:20.253 net/enetc: not in enabled drivers build config
00:02:20.253 net/enetfec: not in enabled drivers build config
00:02:20.253 net/enic: not in enabled drivers build config
00:02:20.253 net/failsafe: not in enabled drivers build config
00:02:20.253 net/fm10k: not in enabled drivers build config
00:02:20.253 net/gve: not in enabled drivers build config
00:02:20.253 net/hinic: not in enabled drivers build config
00:02:20.253 net/hns3: not in enabled drivers build config
00:02:20.253 net/i40e: not in enabled drivers build config
00:02:20.253 net/iavf: not in enabled drivers build config
00:02:20.253 net/ice: not in enabled drivers build config
00:02:20.253 net/idpf: not in enabled drivers build config
00:02:20.253 net/igc: not in enabled drivers build config
00:02:20.253 net/ionic: not in enabled drivers build config
00:02:20.253 net/ipn3ke: not in enabled drivers build config
00:02:20.253 net/ixgbe: not in enabled drivers build config
00:02:20.253 net/mana: not in enabled drivers build config
00:02:20.253 net/memif: not in enabled drivers build config
00:02:20.253 net/mlx4: not in enabled drivers build config
00:02:20.253 net/mlx5: not in enabled drivers build config
00:02:20.253 net/mvneta: not in enabled drivers build config
00:02:20.253 net/mvpp2: not in enabled drivers build config
00:02:20.253 net/netvsc: not in enabled drivers build config
00:02:20.253 net/nfb: not in enabled drivers build config
00:02:20.253 net/nfp: not in enabled drivers build config
00:02:20.253 net/ngbe: not in enabled drivers build config
00:02:20.253 net/null: not in enabled drivers build config
00:02:20.253 net/octeontx: not in enabled drivers build config
00:02:20.253 net/octeon_ep: not in enabled drivers build config
00:02:20.253 net/pcap: not in enabled drivers build config
00:02:20.253 net/pfe: not in enabled drivers build config
00:02:20.253 net/qede: not in enabled drivers build config
00:02:20.253 net/ring: not in enabled drivers build config
00:02:20.253 net/sfc: not in enabled drivers build config
00:02:20.253 net/softnic: not in enabled drivers build config
00:02:20.253 net/tap: not in enabled drivers build config
00:02:20.253 net/thunderx: not in enabled drivers build config
00:02:20.253 net/txgbe: not in enabled drivers build config
00:02:20.253 net/vdev_netvsc: not in enabled drivers build config
00:02:20.253 net/vhost: not in enabled drivers build config
00:02:20.253 net/virtio: not in enabled drivers build config
00:02:20.253 net/vmxnet3: not in enabled drivers build config
00:02:20.253 raw/*: missing internal dependency, "rawdev"
00:02:20.253 crypto/armv8: not in enabled drivers build config
00:02:20.253 crypto/bcmfs: not in enabled drivers build config
00:02:20.253 crypto/caam_jr: not in enabled drivers build config
00:02:20.253 crypto/ccp: not in enabled drivers build config
00:02:20.253 crypto/cnxk: not in enabled drivers build config
00:02:20.253 crypto/dpaa_sec: not in enabled drivers build config
00:02:20.253 crypto/dpaa2_sec: not in enabled drivers build config
00:02:20.253 crypto/ipsec_mb: not in enabled drivers build config
00:02:20.253 crypto/mlx5: not in enabled drivers build config
00:02:20.253 crypto/mvsam: not in enabled drivers build config
00:02:20.253 crypto/nitrox: not in enabled drivers build config
00:02:20.253 crypto/null: not in enabled drivers build config
00:02:20.253 crypto/octeontx: not in enabled drivers build config
00:02:20.253 crypto/openssl: not in enabled drivers build config
00:02:20.253 crypto/scheduler: not in enabled drivers build config
00:02:20.253 crypto/uadk: not in enabled drivers build config
00:02:20.253 crypto/virtio: not in enabled drivers build config
00:02:20.253 compress/isal: not in enabled drivers build config
00:02:20.253 compress/mlx5: not in enabled drivers build config
00:02:20.253 compress/nitrox: not in enabled drivers build config
00:02:20.253 compress/octeontx: not in enabled drivers build config
00:02:20.253 compress/zlib: not in enabled drivers build config
00:02:20.253 regex/*: missing internal dependency, "regexdev"
00:02:20.253 ml/*: missing internal dependency, "mldev"
00:02:20.253 vdpa/ifc: not in enabled drivers build config
00:02:20.253 vdpa/mlx5: not in enabled drivers build config
00:02:20.253 vdpa/nfp: not in enabled drivers build config
00:02:20.253 vdpa/sfc: not in enabled drivers build config
00:02:20.253 event/*: missing internal dependency, "eventdev"
00:02:20.253 baseband/*: missing internal dependency, "bbdev"
00:02:20.253 gpu/*: missing internal dependency, "gpudev"
00:02:20.253
00:02:20.253
00:02:20.513 Build targets in project: 85
00:02:20.513
00:02:20.513 DPDK 24.03.0
00:02:20.513
00:02:20.513 User defined options
00:02:20.513 buildtype : debug
00:02:20.513 default_library : shared
00:02:20.513 libdir : lib
00:02:20.513 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:20.513 b_sanitize : address
00:02:20.513 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:20.513 c_link_args :
00:02:20.513 cpu_instruction_set: native
00:02:20.513 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:02:20.513 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:02:20.513 enable_docs : false
00:02:20.513 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:20.513 enable_kmods : false
00:02:20.513 max_lcores : 128
00:02:20.513 tests : false
00:02:20.513
00:02:20.513 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
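For reference, the "User defined options" summary above corresponds to a meson setup invocation roughly like the hedged sketch below; the exact command is not in the log, the long disable_apps/disable_libs and enable_drivers lists are abbreviated here, and the option names are taken from the summary itself:

  #!/usr/bin/env bash
  # Hedged reconstruction of a meson setup call matching the options above.
  meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Dmax_lcores=128 \
      -Dtests=false \
      -Denable_kmods=false
      # plus the disable_apps/disable_libs/enable_drivers lists shown above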
00:02:20.779 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:20.779 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:21.039 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:21.039 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:21.040 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:21.040 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:21.040 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:21.040 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:21.040 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:21.040 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:21.040 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:21.040 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:21.040 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:21.040 [13/268] Linking static target lib/librte_kvargs.a
00:02:21.040 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:21.040 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:21.040 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:21.040 [17/268] Linking static target lib/librte_log.a
00:02:21.040 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:21.040 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:21.305 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:21.305 [21/268] Linking static target lib/librte_pci.a
00:02:21.305 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:21.305 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:21.305 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:21.305 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:21.305 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:21.566 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:21.566 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:21.566 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:21.566 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:21.566 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:21.566 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:21.566 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:21.566 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:21.566 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:21.566 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:21.566 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:21.566 [38/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:21.566 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:21.566 [40/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:21.566 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:21.566 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:21.566 [43/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:21.566 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:21.566 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:21.566 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:21.566 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:21.566 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:21.566 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:21.566 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:21.566 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:21.566 [52/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:21.566 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:21.566 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:21.566 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:21.566 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:21.566 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:21.566 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:21.566 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:21.566 [60/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:21.567 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:21.567 [62/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:21.567 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:21.567 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:21.567 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:21.567 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:21.567 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:21.567 [68/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:21.567 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:21.567 [70/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:21.567 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:21.567 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:21.567 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:21.567 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:21.567 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:21.567 [76/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:21.567 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:21.567 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:21.567 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:21.567 [80/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:21.567 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:21.567 [82/268] Linking static target lib/librte_meter.a
00:02:21.567 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:21.567 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:21.567 [85/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:21.567 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:21.567 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:21.826 [88/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:21.826 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:21.826 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:21.826 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:21.826 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:21.826 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:21.826 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:21.826 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:21.826 [96/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:21.826 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:21.826 [98/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.826 [99/268] Linking static target lib/librte_ring.a
00:02:21.826 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:21.826 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:21.826 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:21.826 [103/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:21.826 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:21.826 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:21.826 [106/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.826 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.826 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.826 [109/268] Linking static target lib/librte_telemetry.a 00:02:21.826 [110/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.826 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:21.826 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.827 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.827 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.827 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:21.827 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.827 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.827 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:21.827 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.827 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.827 [121/268] Linking static target lib/librte_mempool.a 00:02:21.827 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.827 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.827 [124/268] Linking static target lib/librte_net.a 00:02:21.827 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.827 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:21.827 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.827 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.827 [129/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.827 [130/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.827 [131/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.827 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.086 [133/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:22.086 [134/268] Linking target lib/librte_log.so.24.1 00:02:22.086 [135/268] Linking static target lib/librte_eal.a 00:02:22.086 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.086 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.086 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.086 [139/268] Linking static target lib/librte_cmdline.a 00:02:22.086 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.086 [141/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.086 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.086 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:22.086 [144/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.086 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.086 [146/268] Linking static target lib/librte_timer.a 00:02:22.086 [147/268] Compiling C 
object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.086 [148/268] Linking static target lib/librte_dmadev.a 00:02:22.086 [149/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.086 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.086 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.086 [152/268] Linking static target lib/librte_rcu.a 00:02:22.086 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.086 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:22.086 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.086 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.086 [157/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:22.086 [158/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:22.086 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.086 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.086 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.086 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.086 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.086 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.086 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:22.086 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.086 [167/268] Linking target lib/librte_kvargs.so.24.1 00:02:22.086 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.086 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.086 [170/268] Linking static target lib/librte_compressdev.a 00:02:22.345 [171/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.345 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.345 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.345 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.345 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.345 [176/268] Linking target lib/librte_telemetry.so.24.1 00:02:22.345 [177/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.345 [178/268] Linking static target lib/librte_power.a 00:02:22.345 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.345 [180/268] Linking static target lib/librte_mbuf.a 00:02:22.345 [181/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:22.345 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.345 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.345 [184/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.345 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.345 [186/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.345 [187/268] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.345 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.345 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.345 [190/268] Linking static target drivers/librte_bus_vdev.a 00:02:22.345 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.345 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:22.345 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.345 [194/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.345 [195/268] Linking static target lib/librte_security.a 00:02:22.345 [196/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.604 [197/268] Linking static target lib/librte_reorder.a 00:02:22.604 [198/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.604 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.604 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.605 [201/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.605 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.605 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.605 [204/268] Linking static target drivers/librte_mempool_ring.a 00:02:22.605 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.605 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.605 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.605 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.605 [209/268] Linking static target drivers/librte_bus_pci.a 00:02:22.605 [210/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.605 [211/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.863 [212/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.863 [213/268] Linking static target lib/librte_hash.a 00:02:22.863 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.863 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.863 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.122 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.122 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.122 [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.122 [220/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.122 [221/268] Linking static target lib/librte_cryptodev.a 00:02:23.122 [222/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.381 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.640 [224/268] Compiling 
C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.640 [225/268] Linking static target lib/librte_ethdev.a 00:02:23.640 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.576 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.834 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.122 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.122 [230/268] Linking static target lib/librte_vhost.a 00:02:29.498 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.404 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.663 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.664 [234/268] Linking target lib/librte_eal.so.24.1 00:02:31.922 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:31.922 [236/268] Linking target lib/librte_pci.so.24.1 00:02:31.922 [237/268] Linking target lib/librte_ring.so.24.1 00:02:31.922 [238/268] Linking target lib/librte_meter.so.24.1 00:02:31.922 [239/268] Linking target lib/librte_timer.so.24.1 00:02:31.922 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:31.922 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.181 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.181 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.181 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.181 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.181 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.181 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:32.181 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.181 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:32.181 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:32.181 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:32.440 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:32.440 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:32.440 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:32.440 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:32.440 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:32.440 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:32.440 [258/268] Linking target lib/librte_net.so.24.1 00:02:32.700 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:32.700 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:32.700 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:32.700 [262/268] Linking target lib/librte_security.so.24.1 00:02:32.700 [263/268] Linking target lib/librte_hash.so.24.1 00:02:32.700 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:32.700 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:32.959 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.959 [267/268] Linking 
target lib/librte_power.so.24.1 00:02:32.959 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.959 INFO: autodetecting backend as ninja 00:02:32.959 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:42.939 CC lib/ut/ut.o 00:02:42.939 CC lib/log/log_flags.o 00:02:42.939 CC lib/log/log.o 00:02:42.939 CC lib/log/log_deprecated.o 00:02:42.939 CC lib/ut_mock/mock.o 00:02:43.198 LIB libspdk_log.a 00:02:43.198 LIB libspdk_ut.a 00:02:43.198 LIB libspdk_ut_mock.a 00:02:43.198 SO libspdk_ut.so.2.0 00:02:43.198 SO libspdk_log.so.7.1 00:02:43.198 SO libspdk_ut_mock.so.6.0 00:02:43.198 SYMLINK libspdk_ut.so 00:02:43.198 SYMLINK libspdk_ut_mock.so 00:02:43.198 SYMLINK libspdk_log.so 00:02:43.457 CC lib/dma/dma.o 00:02:43.457 CC lib/util/base64.o 00:02:43.457 CC lib/util/bit_array.o 00:02:43.457 CC lib/util/crc16.o 00:02:43.457 CC lib/util/cpuset.o 00:02:43.457 CXX lib/trace_parser/trace.o 00:02:43.457 CC lib/util/crc32.o 00:02:43.457 CC lib/util/crc32_ieee.o 00:02:43.457 CC lib/util/crc32c.o 00:02:43.457 CC lib/util/fd.o 00:02:43.457 CC lib/util/crc64.o 00:02:43.457 CC lib/util/dif.o 00:02:43.457 CC lib/util/fd_group.o 00:02:43.457 CC lib/util/file.o 00:02:43.457 CC lib/util/math.o 00:02:43.457 CC lib/util/hexlify.o 00:02:43.457 CC lib/util/net.o 00:02:43.457 CC lib/util/iov.o 00:02:43.457 CC lib/ioat/ioat.o 00:02:43.457 CC lib/util/pipe.o 00:02:43.457 CC lib/util/strerror_tls.o 00:02:43.457 CC lib/util/string.o 00:02:43.457 CC lib/util/uuid.o 00:02:43.457 CC lib/util/xor.o 00:02:43.457 CC lib/util/zipf.o 00:02:43.457 CC lib/util/md5.o 00:02:43.717 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.717 CC lib/vfio_user/host/vfio_user.o 00:02:43.717 LIB libspdk_dma.a 00:02:43.717 SO libspdk_dma.so.5.0 00:02:43.717 SYMLINK libspdk_dma.so 00:02:43.717 LIB libspdk_ioat.a 00:02:43.717 SO libspdk_ioat.so.7.0 00:02:43.976 LIB libspdk_vfio_user.a 00:02:43.976 SYMLINK libspdk_ioat.so 00:02:43.976 SO libspdk_vfio_user.so.5.0 00:02:43.976 SYMLINK libspdk_vfio_user.so 00:02:43.976 LIB libspdk_util.a 00:02:44.236 SO libspdk_util.so.10.1 00:02:44.236 SYMLINK libspdk_util.so 00:02:44.236 LIB libspdk_trace_parser.a 00:02:44.236 SO libspdk_trace_parser.so.6.0 00:02:44.494 SYMLINK libspdk_trace_parser.so 00:02:44.494 CC lib/idxd/idxd.o 00:02:44.494 CC lib/idxd/idxd_user.o 00:02:44.494 CC lib/idxd/idxd_kernel.o 00:02:44.494 CC lib/rdma_utils/rdma_utils.o 00:02:44.494 CC lib/vmd/vmd.o 00:02:44.494 CC lib/vmd/led.o 00:02:44.494 CC lib/conf/conf.o 00:02:44.494 CC lib/json/json_parse.o 00:02:44.494 CC lib/json/json_write.o 00:02:44.494 CC lib/json/json_util.o 00:02:44.494 CC lib/env_dpdk/env.o 00:02:44.494 CC lib/env_dpdk/pci.o 00:02:44.494 CC lib/env_dpdk/memory.o 00:02:44.494 CC lib/env_dpdk/threads.o 00:02:44.494 CC lib/env_dpdk/init.o 00:02:44.494 CC lib/env_dpdk/pci_virtio.o 00:02:44.494 CC lib/env_dpdk/pci_ioat.o 00:02:44.494 CC lib/env_dpdk/pci_vmd.o 00:02:44.494 CC lib/env_dpdk/pci_idxd.o 00:02:44.494 CC lib/env_dpdk/pci_event.o 00:02:44.494 CC lib/env_dpdk/sigbus_handler.o 00:02:44.494 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.494 CC lib/env_dpdk/pci_dpdk.o 00:02:44.494 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.752 LIB libspdk_conf.a 00:02:44.752 SO libspdk_conf.so.6.0 00:02:44.752 LIB libspdk_rdma_utils.a 00:02:45.010 SO libspdk_rdma_utils.so.1.0 00:02:45.010 LIB libspdk_json.a 00:02:45.010 SYMLINK libspdk_conf.so 00:02:45.010 SYMLINK libspdk_rdma_utils.so 00:02:45.010 SO libspdk_json.so.6.0 00:02:45.010 SYMLINK 
libspdk_json.so 00:02:45.268 LIB libspdk_idxd.a 00:02:45.268 SO libspdk_idxd.so.12.1 00:02:45.268 CC lib/rdma_provider/common.o 00:02:45.268 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.268 LIB libspdk_vmd.a 00:02:45.268 SO libspdk_vmd.so.6.0 00:02:45.268 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.268 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.268 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.268 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.268 SYMLINK libspdk_idxd.so 00:02:45.268 SYMLINK libspdk_vmd.so 00:02:45.526 LIB libspdk_rdma_provider.a 00:02:45.526 SO libspdk_rdma_provider.so.7.0 00:02:45.526 SYMLINK libspdk_rdma_provider.so 00:02:45.526 LIB libspdk_jsonrpc.a 00:02:45.526 SO libspdk_jsonrpc.so.6.0 00:02:45.784 SYMLINK libspdk_jsonrpc.so 00:02:46.042 LIB libspdk_env_dpdk.a 00:02:46.042 CC lib/rpc/rpc.o 00:02:46.042 SO libspdk_env_dpdk.so.15.1 00:02:46.042 SYMLINK libspdk_env_dpdk.so 00:02:46.042 LIB libspdk_rpc.a 00:02:46.301 SO libspdk_rpc.so.6.0 00:02:46.301 SYMLINK libspdk_rpc.so 00:02:46.560 CC lib/keyring/keyring.o 00:02:46.560 CC lib/keyring/keyring_rpc.o 00:02:46.560 CC lib/trace/trace.o 00:02:46.560 CC lib/notify/notify.o 00:02:46.560 CC lib/trace/trace_rpc.o 00:02:46.560 CC lib/notify/notify_rpc.o 00:02:46.560 CC lib/trace/trace_flags.o 00:02:46.819 LIB libspdk_notify.a 00:02:46.819 SO libspdk_notify.so.6.0 00:02:46.819 LIB libspdk_keyring.a 00:02:46.819 SYMLINK libspdk_notify.so 00:02:46.819 LIB libspdk_trace.a 00:02:46.819 SO libspdk_keyring.so.2.0 00:02:46.819 SO libspdk_trace.so.11.0 00:02:46.819 SYMLINK libspdk_keyring.so 00:02:46.819 SYMLINK libspdk_trace.so 00:02:47.387 CC lib/thread/thread.o 00:02:47.387 CC lib/thread/iobuf.o 00:02:47.387 CC lib/sock/sock.o 00:02:47.387 CC lib/sock/sock_rpc.o 00:02:47.646 LIB libspdk_sock.a 00:02:47.646 SO libspdk_sock.so.10.0 00:02:47.646 SYMLINK libspdk_sock.so 00:02:47.904 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.904 CC lib/nvme/nvme_ctrlr.o 00:02:47.904 CC lib/nvme/nvme_fabric.o 00:02:47.904 CC lib/nvme/nvme_ns.o 00:02:47.904 CC lib/nvme/nvme_pcie_common.o 00:02:47.904 CC lib/nvme/nvme_ns_cmd.o 00:02:47.904 CC lib/nvme/nvme_pcie.o 00:02:47.904 CC lib/nvme/nvme_qpair.o 00:02:47.904 CC lib/nvme/nvme_transport.o 00:02:47.904 CC lib/nvme/nvme.o 00:02:47.904 CC lib/nvme/nvme_quirks.o 00:02:47.904 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.904 CC lib/nvme/nvme_discovery.o 00:02:47.904 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.904 CC lib/nvme/nvme_tcp.o 00:02:47.904 CC lib/nvme/nvme_io_msg.o 00:02:47.904 CC lib/nvme/nvme_opal.o 00:02:47.904 CC lib/nvme/nvme_poll_group.o 00:02:47.904 CC lib/nvme/nvme_zns.o 00:02:47.904 CC lib/nvme/nvme_stubs.o 00:02:47.904 CC lib/nvme/nvme_auth.o 00:02:47.904 CC lib/nvme/nvme_cuse.o 00:02:47.904 CC lib/nvme/nvme_rdma.o 00:02:48.837 LIB libspdk_thread.a 00:02:48.837 SO libspdk_thread.so.11.0 00:02:48.837 SYMLINK libspdk_thread.so 00:02:49.095 CC lib/virtio/virtio.o 00:02:49.095 CC lib/virtio/virtio_vfio_user.o 00:02:49.095 CC lib/virtio/virtio_vhost_user.o 00:02:49.095 CC lib/virtio/virtio_pci.o 00:02:49.095 CC lib/blob/blobstore.o 00:02:49.095 CC lib/accel/accel.o 00:02:49.095 CC lib/accel/accel_rpc.o 00:02:49.095 CC lib/blob/request.o 00:02:49.095 CC lib/blob/zeroes.o 00:02:49.095 CC lib/accel/accel_sw.o 00:02:49.095 CC lib/blob/blob_bs_dev.o 00:02:49.095 CC lib/init/json_config.o 00:02:49.095 CC lib/init/subsystem.o 00:02:49.095 CC lib/init/subsystem_rpc.o 00:02:49.095 CC lib/init/rpc.o 00:02:49.095 CC lib/fsdev/fsdev.o 00:02:49.095 CC lib/fsdev/fsdev_io.o 00:02:49.095 CC lib/fsdev/fsdev_rpc.o 
00:02:49.353 LIB libspdk_init.a 00:02:49.353 SO libspdk_init.so.6.0 00:02:49.353 LIB libspdk_virtio.a 00:02:49.353 SYMLINK libspdk_init.so 00:02:49.353 SO libspdk_virtio.so.7.0 00:02:49.610 SYMLINK libspdk_virtio.so 00:02:49.610 LIB libspdk_fsdev.a 00:02:49.610 SO libspdk_fsdev.so.2.0 00:02:49.610 CC lib/event/app.o 00:02:49.610 CC lib/event/reactor.o 00:02:49.610 CC lib/event/log_rpc.o 00:02:49.610 CC lib/event/scheduler_static.o 00:02:49.610 CC lib/event/app_rpc.o 00:02:49.867 SYMLINK libspdk_fsdev.so 00:02:50.124 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:50.124 LIB libspdk_accel.a 00:02:50.124 LIB libspdk_nvme.a 00:02:50.124 SO libspdk_accel.so.16.0 00:02:50.124 LIB libspdk_event.a 00:02:50.124 SO libspdk_nvme.so.15.0 00:02:50.124 SO libspdk_event.so.14.0 00:02:50.124 SYMLINK libspdk_accel.so 00:02:50.382 SYMLINK libspdk_event.so 00:02:50.382 SYMLINK libspdk_nvme.so 00:02:50.638 CC lib/bdev/bdev.o 00:02:50.638 CC lib/bdev/bdev_rpc.o 00:02:50.638 CC lib/bdev/bdev_zone.o 00:02:50.638 CC lib/bdev/scsi_nvme.o 00:02:50.638 CC lib/bdev/part.o 00:02:50.638 LIB libspdk_fuse_dispatcher.a 00:02:50.639 SO libspdk_fuse_dispatcher.so.1.0 00:02:50.639 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.012 LIB libspdk_blob.a 00:02:52.012 SO libspdk_blob.so.12.0 00:02:52.270 SYMLINK libspdk_blob.so 00:02:52.528 CC lib/blobfs/blobfs.o 00:02:52.528 CC lib/blobfs/tree.o 00:02:52.528 CC lib/lvol/lvol.o 00:02:53.093 LIB libspdk_bdev.a 00:02:53.093 SO libspdk_bdev.so.17.0 00:02:53.093 SYMLINK libspdk_bdev.so 00:02:53.093 LIB libspdk_blobfs.a 00:02:53.093 SO libspdk_blobfs.so.11.0 00:02:53.352 SYMLINK libspdk_blobfs.so 00:02:53.352 LIB libspdk_lvol.a 00:02:53.352 CC lib/nbd/nbd.o 00:02:53.352 CC lib/nbd/nbd_rpc.o 00:02:53.352 CC lib/ftl/ftl_core.o 00:02:53.352 CC lib/ftl/ftl_init.o 00:02:53.352 CC lib/nvmf/ctrlr.o 00:02:53.352 CC lib/ftl/ftl_io.o 00:02:53.352 CC lib/ftl/ftl_layout.o 00:02:53.352 CC lib/ftl/ftl_debug.o 00:02:53.352 CC lib/ftl/ftl_l2p.o 00:02:53.352 CC lib/nvmf/ctrlr_discovery.o 00:02:53.352 CC lib/nvmf/subsystem.o 00:02:53.352 CC lib/ftl/ftl_sb.o 00:02:53.352 CC lib/nvmf/ctrlr_bdev.o 00:02:53.352 SO libspdk_lvol.so.11.0 00:02:53.352 CC lib/nvmf/nvmf.o 00:02:53.352 CC lib/ftl/ftl_l2p_flat.o 00:02:53.352 CC lib/nvmf/nvmf_rpc.o 00:02:53.352 CC lib/ftl/ftl_nv_cache.o 00:02:53.352 CC lib/nvmf/transport.o 00:02:53.352 CC lib/ftl/ftl_band.o 00:02:53.352 CC lib/ftl/ftl_band_ops.o 00:02:53.352 CC lib/nvmf/tcp.o 00:02:53.352 CC lib/nvmf/stubs.o 00:02:53.352 CC lib/scsi/dev.o 00:02:53.352 CC lib/nvmf/mdns_server.o 00:02:53.352 CC lib/nvmf/rdma.o 00:02:53.352 CC lib/ftl/ftl_writer.o 00:02:53.352 CC lib/ftl/ftl_rq.o 00:02:53.352 CC lib/scsi/lun.o 00:02:53.352 CC lib/nvmf/auth.o 00:02:53.352 CC lib/ftl/ftl_reloc.o 00:02:53.352 CC lib/scsi/port.o 00:02:53.352 CC lib/scsi/scsi.o 00:02:53.352 CC lib/ftl/ftl_l2p_cache.o 00:02:53.352 CC lib/ublk/ublk.o 00:02:53.352 CC lib/ftl/ftl_p2l.o 00:02:53.352 CC lib/scsi/scsi_bdev.o 00:02:53.352 CC lib/scsi/scsi_pr.o 00:02:53.352 CC lib/ftl/ftl_p2l_log.o 00:02:53.352 CC lib/scsi/task.o 00:02:53.352 CC lib/ublk/ublk_rpc.o 00:02:53.352 CC lib/scsi/scsi_rpc.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.352 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.352 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.352 CC lib/ftl/utils/ftl_conf.o 00:02:53.352 CC lib/ftl/utils/ftl_md.o 00:02:53.352 CC lib/ftl/utils/ftl_mempool.o 00:02:53.352 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.352 CC lib/ftl/utils/ftl_property.o 00:02:53.352 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.352 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.352 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.352 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.352 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.352 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.352 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.352 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.352 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:53.352 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.352 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:53.352 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.352 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:53.352 CC lib/ftl/base/ftl_base_dev.o 00:02:53.352 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.352 CC lib/ftl/ftl_trace.o 00:02:53.611 SYMLINK libspdk_lvol.so 00:02:54.177 LIB libspdk_nbd.a 00:02:54.177 SO libspdk_nbd.so.7.0 00:02:54.177 LIB libspdk_scsi.a 00:02:54.177 SYMLINK libspdk_nbd.so 00:02:54.177 SO libspdk_scsi.so.9.0 00:02:54.177 SYMLINK libspdk_scsi.so 00:02:54.177 LIB libspdk_ublk.a 00:02:54.436 SO libspdk_ublk.so.3.0 00:02:54.436 SYMLINK libspdk_ublk.so 00:02:54.436 CC lib/iscsi/init_grp.o 00:02:54.436 CC lib/iscsi/conn.o 00:02:54.436 CC lib/iscsi/iscsi.o 00:02:54.436 CC lib/iscsi/param.o 00:02:54.436 CC lib/iscsi/portal_grp.o 00:02:54.436 CC lib/iscsi/iscsi_subsystem.o 00:02:54.436 CC lib/iscsi/tgt_node.o 00:02:54.436 CC lib/iscsi/task.o 00:02:54.436 CC lib/iscsi/iscsi_rpc.o 00:02:54.695 CC lib/vhost/vhost.o 00:02:54.695 CC lib/vhost/vhost_rpc.o 00:02:54.695 CC lib/vhost/vhost_scsi.o 00:02:54.695 CC lib/vhost/rte_vhost_user.o 00:02:54.695 CC lib/vhost/vhost_blk.o 00:02:54.695 LIB libspdk_ftl.a 00:02:54.954 SO libspdk_ftl.so.9.0 00:02:55.213 SYMLINK libspdk_ftl.so 00:02:55.472 LIB libspdk_vhost.a 00:02:55.472 SO libspdk_vhost.so.8.0 00:02:55.731 SYMLINK libspdk_vhost.so 00:02:55.731 LIB libspdk_nvmf.a 00:02:55.990 LIB libspdk_iscsi.a 00:02:55.990 SO libspdk_nvmf.so.20.0 00:02:55.990 SO libspdk_iscsi.so.8.0 00:02:55.990 SYMLINK libspdk_iscsi.so 00:02:55.990 SYMLINK libspdk_nvmf.so 00:02:56.557 CC module/env_dpdk/env_dpdk_rpc.o 00:02:56.557 CC module/accel/error/accel_error.o 00:02:56.557 CC module/accel/error/accel_error_rpc.o 00:02:56.557 CC module/accel/ioat/accel_ioat.o 00:02:56.557 CC module/accel/ioat/accel_ioat_rpc.o 00:02:56.557 CC module/accel/dsa/accel_dsa_rpc.o 00:02:56.557 CC module/accel/dsa/accel_dsa.o 00:02:56.816 CC module/sock/posix/posix.o 00:02:56.816 CC module/keyring/file/keyring_rpc.o 00:02:56.816 CC module/keyring/file/keyring.o 00:02:56.816 CC module/blob/bdev/blob_bdev.o 00:02:56.816 CC module/scheduler/gscheduler/gscheduler.o 00:02:56.816 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:56.816 CC module/keyring/linux/keyring.o 00:02:56.816 CC module/keyring/linux/keyring_rpc.o 00:02:56.816 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:56.816 CC module/accel/iaa/accel_iaa_rpc.o 00:02:56.816 CC module/accel/iaa/accel_iaa.o 00:02:56.816 CC module/fsdev/aio/fsdev_aio.o 00:02:56.816 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:56.816 CC module/fsdev/aio/linux_aio_mgr.o 00:02:56.816 LIB libspdk_env_dpdk_rpc.a 00:02:56.816 SO 
libspdk_env_dpdk_rpc.so.6.0 00:02:56.816 SYMLINK libspdk_env_dpdk_rpc.so 00:02:56.816 LIB libspdk_keyring_file.a 00:02:56.816 LIB libspdk_scheduler_dpdk_governor.a 00:02:56.816 LIB libspdk_accel_ioat.a 00:02:56.816 LIB libspdk_keyring_linux.a 00:02:56.816 SO libspdk_keyring_file.so.2.0 00:02:56.816 LIB libspdk_scheduler_gscheduler.a 00:02:56.816 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:56.816 LIB libspdk_accel_error.a 00:02:56.816 SO libspdk_accel_ioat.so.6.0 00:02:56.816 SO libspdk_keyring_linux.so.1.0 00:02:56.816 LIB libspdk_scheduler_dynamic.a 00:02:56.816 SO libspdk_scheduler_gscheduler.so.4.0 00:02:56.816 LIB libspdk_accel_iaa.a 00:02:56.816 SO libspdk_accel_error.so.2.0 00:02:56.816 SO libspdk_scheduler_dynamic.so.4.0 00:02:56.816 SYMLINK libspdk_keyring_file.so 00:02:57.075 SO libspdk_accel_iaa.so.3.0 00:02:57.075 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.075 SYMLINK libspdk_accel_ioat.so 00:02:57.075 SYMLINK libspdk_keyring_linux.so 00:02:57.075 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.075 LIB libspdk_accel_dsa.a 00:02:57.075 SO libspdk_accel_dsa.so.5.0 00:02:57.075 LIB libspdk_blob_bdev.a 00:02:57.075 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.075 SYMLINK libspdk_accel_error.so 00:02:57.075 SYMLINK libspdk_accel_iaa.so 00:02:57.075 SO libspdk_blob_bdev.so.12.0 00:02:57.075 SYMLINK libspdk_accel_dsa.so 00:02:57.075 SYMLINK libspdk_blob_bdev.so 00:02:57.334 LIB libspdk_fsdev_aio.a 00:02:57.334 SO libspdk_fsdev_aio.so.1.0 00:02:57.334 LIB libspdk_sock_posix.a 00:02:57.592 SO libspdk_sock_posix.so.6.0 00:02:57.592 CC module/bdev/delay/vbdev_delay.o 00:02:57.592 CC module/bdev/raid/bdev_raid.o 00:02:57.592 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.592 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.592 CC module/bdev/raid/bdev_raid_sb.o 00:02:57.592 CC module/bdev/raid/raid0.o 00:02:57.592 CC module/bdev/raid/raid1.o 00:02:57.592 CC module/bdev/raid/concat.o 00:02:57.592 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:57.592 CC module/bdev/error/vbdev_error.o 00:02:57.592 CC module/bdev/ftl/bdev_ftl.o 00:02:57.592 CC module/bdev/malloc/bdev_malloc.o 00:02:57.592 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.592 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.592 CC module/bdev/lvol/vbdev_lvol.o 00:02:57.592 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:57.592 CC module/bdev/nvme/bdev_nvme.o 00:02:57.592 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.592 CC module/bdev/nvme/nvme_rpc.o 00:02:57.592 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.592 SYMLINK libspdk_fsdev_aio.so 00:02:57.592 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:57.592 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:57.592 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:57.592 CC module/bdev/nvme/vbdev_opal.o 00:02:57.592 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:57.592 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:57.592 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:57.592 CC module/bdev/gpt/gpt.o 00:02:57.592 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.592 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:57.592 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:57.592 CC module/bdev/null/bdev_null.o 00:02:57.592 CC module/bdev/iscsi/bdev_iscsi.o 00:02:57.592 CC module/bdev/null/bdev_null_rpc.o 00:02:57.592 CC module/blobfs/bdev/blobfs_bdev.o 00:02:57.592 CC module/bdev/aio/bdev_aio.o 00:02:57.592 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.592 CC module/bdev/aio/bdev_aio_rpc.o 00:02:57.592 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.592 CC module/bdev/passthru/vbdev_passthru.o 
00:02:57.592 CC module/bdev/split/vbdev_split.o 00:02:57.592 CC module/bdev/split/vbdev_split_rpc.o 00:02:57.592 SYMLINK libspdk_sock_posix.so 00:02:57.850 LIB libspdk_blobfs_bdev.a 00:02:57.850 SO libspdk_blobfs_bdev.so.6.0 00:02:57.851 LIB libspdk_bdev_split.a 00:02:57.851 LIB libspdk_bdev_error.a 00:02:57.851 LIB libspdk_bdev_null.a 00:02:57.851 SO libspdk_bdev_split.so.6.0 00:02:57.851 SO libspdk_bdev_error.so.6.0 00:02:57.851 SYMLINK libspdk_blobfs_bdev.so 00:02:57.851 LIB libspdk_bdev_gpt.a 00:02:57.851 SO libspdk_bdev_null.so.6.0 00:02:57.851 LIB libspdk_bdev_ftl.a 00:02:57.851 LIB libspdk_bdev_passthru.a 00:02:57.851 SO libspdk_bdev_gpt.so.6.0 00:02:57.851 SYMLINK libspdk_bdev_split.so 00:02:57.851 LIB libspdk_bdev_zone_block.a 00:02:57.851 SO libspdk_bdev_passthru.so.6.0 00:02:57.851 SO libspdk_bdev_ftl.so.6.0 00:02:57.851 SYMLINK libspdk_bdev_error.so 00:02:57.851 LIB libspdk_bdev_delay.a 00:02:57.851 SYMLINK libspdk_bdev_null.so 00:02:58.109 LIB libspdk_bdev_aio.a 00:02:58.109 SO libspdk_bdev_zone_block.so.6.0 00:02:58.109 LIB libspdk_bdev_malloc.a 00:02:58.109 SO libspdk_bdev_delay.so.6.0 00:02:58.109 LIB libspdk_bdev_iscsi.a 00:02:58.109 SYMLINK libspdk_bdev_gpt.so 00:02:58.109 SO libspdk_bdev_aio.so.6.0 00:02:58.109 SYMLINK libspdk_bdev_passthru.so 00:02:58.109 SYMLINK libspdk_bdev_ftl.so 00:02:58.109 SO libspdk_bdev_iscsi.so.6.0 00:02:58.109 SO libspdk_bdev_malloc.so.6.0 00:02:58.109 SYMLINK libspdk_bdev_zone_block.so 00:02:58.109 SYMLINK libspdk_bdev_delay.so 00:02:58.109 SYMLINK libspdk_bdev_aio.so 00:02:58.109 SYMLINK libspdk_bdev_malloc.so 00:02:58.109 SYMLINK libspdk_bdev_iscsi.so 00:02:58.109 LIB libspdk_bdev_lvol.a 00:02:58.109 LIB libspdk_bdev_virtio.a 00:02:58.109 SO libspdk_bdev_lvol.so.6.0 00:02:58.109 SO libspdk_bdev_virtio.so.6.0 00:02:58.109 SYMLINK libspdk_bdev_lvol.so 00:02:58.109 SYMLINK libspdk_bdev_virtio.so 00:02:58.677 LIB libspdk_bdev_raid.a 00:02:58.677 SO libspdk_bdev_raid.so.6.0 00:02:58.677 SYMLINK libspdk_bdev_raid.so 00:03:00.055 LIB libspdk_bdev_nvme.a 00:03:00.055 SO libspdk_bdev_nvme.so.7.1 00:03:00.055 SYMLINK libspdk_bdev_nvme.so 00:03:00.623 CC module/event/subsystems/sock/sock.o 00:03:00.623 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.623 CC module/event/subsystems/vmd/vmd.o 00:03:00.623 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.623 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.623 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.623 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.623 CC module/event/subsystems/keyring/keyring.o 00:03:00.623 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.882 LIB libspdk_event_sock.a 00:03:00.882 LIB libspdk_event_keyring.a 00:03:00.882 SO libspdk_event_sock.so.5.0 00:03:00.882 LIB libspdk_event_scheduler.a 00:03:00.882 LIB libspdk_event_vhost_blk.a 00:03:00.882 LIB libspdk_event_vmd.a 00:03:00.882 LIB libspdk_event_fsdev.a 00:03:00.882 SO libspdk_event_keyring.so.1.0 00:03:00.882 LIB libspdk_event_iobuf.a 00:03:00.882 SO libspdk_event_scheduler.so.4.0 00:03:00.882 SO libspdk_event_vhost_blk.so.3.0 00:03:00.882 SO libspdk_event_vmd.so.6.0 00:03:00.882 SO libspdk_event_fsdev.so.1.0 00:03:00.882 SO libspdk_event_iobuf.so.3.0 00:03:00.882 SYMLINK libspdk_event_sock.so 00:03:00.882 SYMLINK libspdk_event_keyring.so 00:03:00.882 SYMLINK libspdk_event_scheduler.so 00:03:00.882 SYMLINK libspdk_event_vhost_blk.so 00:03:00.882 SYMLINK libspdk_event_vmd.so 00:03:00.882 SYMLINK libspdk_event_fsdev.so 00:03:00.882 SYMLINK libspdk_event_iobuf.so 00:03:01.449 CC 
module/event/subsystems/accel/accel.o 00:03:01.449 LIB libspdk_event_accel.a 00:03:01.449 SO libspdk_event_accel.so.6.0 00:03:01.449 SYMLINK libspdk_event_accel.so 00:03:01.707 CC module/event/subsystems/bdev/bdev.o 00:03:01.965 LIB libspdk_event_bdev.a 00:03:01.965 SO libspdk_event_bdev.so.6.0 00:03:01.965 SYMLINK libspdk_event_bdev.so 00:03:02.223 CC module/event/subsystems/nbd/nbd.o 00:03:02.223 CC module/event/subsystems/scsi/scsi.o 00:03:02.223 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.223 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.223 CC module/event/subsystems/ublk/ublk.o 00:03:02.481 LIB libspdk_event_nbd.a 00:03:02.481 LIB libspdk_event_ublk.a 00:03:02.481 LIB libspdk_event_scsi.a 00:03:02.481 SO libspdk_event_nbd.so.6.0 00:03:02.481 SO libspdk_event_ublk.so.3.0 00:03:02.481 SO libspdk_event_scsi.so.6.0 00:03:02.481 LIB libspdk_event_nvmf.a 00:03:02.481 SYMLINK libspdk_event_nbd.so 00:03:02.481 SYMLINK libspdk_event_ublk.so 00:03:02.481 SO libspdk_event_nvmf.so.6.0 00:03:02.481 SYMLINK libspdk_event_scsi.so 00:03:02.739 SYMLINK libspdk_event_nvmf.so 00:03:02.739 CC module/event/subsystems/iscsi/iscsi.o 00:03:02.997 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:02.997 LIB libspdk_event_iscsi.a 00:03:02.998 LIB libspdk_event_vhost_scsi.a 00:03:02.998 SO libspdk_event_iscsi.so.6.0 00:03:02.998 SO libspdk_event_vhost_scsi.so.3.0 00:03:02.998 SYMLINK libspdk_event_iscsi.so 00:03:02.998 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.256 SO libspdk.so.6.0 00:03:03.256 SYMLINK libspdk.so 00:03:03.516 CXX app/trace/trace.o 00:03:03.516 CC app/spdk_lspci/spdk_lspci.o 00:03:03.516 CC app/spdk_nvme_discover/discovery_aer.o 00:03:03.516 TEST_HEADER include/spdk/accel.h 00:03:03.516 CC app/trace_record/trace_record.o 00:03:03.516 TEST_HEADER include/spdk/accel_module.h 00:03:03.516 CC app/spdk_nvme_identify/identify.o 00:03:03.516 TEST_HEADER include/spdk/base64.h 00:03:03.516 TEST_HEADER include/spdk/assert.h 00:03:03.516 TEST_HEADER include/spdk/barrier.h 00:03:03.516 CC app/spdk_top/spdk_top.o 00:03:03.516 TEST_HEADER include/spdk/bdev.h 00:03:03.516 TEST_HEADER include/spdk/bdev_module.h 00:03:03.516 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.516 CC test/rpc_client/rpc_client_test.o 00:03:03.516 TEST_HEADER include/spdk/bit_pool.h 00:03:03.516 TEST_HEADER include/spdk/bit_array.h 00:03:03.516 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.516 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.516 TEST_HEADER include/spdk/blob.h 00:03:03.516 CC app/spdk_nvme_perf/perf.o 00:03:03.516 TEST_HEADER include/spdk/conf.h 00:03:03.516 TEST_HEADER include/spdk/blobfs.h 00:03:03.516 TEST_HEADER include/spdk/config.h 00:03:03.516 TEST_HEADER include/spdk/cpuset.h 00:03:03.516 TEST_HEADER include/spdk/crc16.h 00:03:03.516 TEST_HEADER include/spdk/dif.h 00:03:03.516 TEST_HEADER include/spdk/crc32.h 00:03:03.516 TEST_HEADER include/spdk/crc64.h 00:03:03.516 TEST_HEADER include/spdk/dma.h 00:03:03.516 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.516 TEST_HEADER include/spdk/endian.h 00:03:03.516 TEST_HEADER include/spdk/env.h 00:03:03.516 TEST_HEADER include/spdk/event.h 00:03:03.516 TEST_HEADER include/spdk/fd_group.h 00:03:03.516 TEST_HEADER include/spdk/fd.h 00:03:03.516 TEST_HEADER include/spdk/file.h 00:03:03.516 TEST_HEADER include/spdk/fsdev_module.h 00:03:03.516 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.516 TEST_HEADER include/spdk/ftl.h 00:03:03.516 TEST_HEADER include/spdk/fsdev.h 00:03:03.516 TEST_HEADER include/spdk/hexlify.h 00:03:03.516 TEST_HEADER 
include/spdk/idxd.h 00:03:03.516 TEST_HEADER include/spdk/histogram_data.h 00:03:03.516 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.516 TEST_HEADER include/spdk/init.h 00:03:03.516 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.516 TEST_HEADER include/spdk/ioat.h 00:03:03.516 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.516 TEST_HEADER include/spdk/keyring.h 00:03:03.831 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.831 TEST_HEADER include/spdk/keyring_module.h 00:03:03.831 TEST_HEADER include/spdk/json.h 00:03:03.831 TEST_HEADER include/spdk/likely.h 00:03:03.831 TEST_HEADER include/spdk/log.h 00:03:03.831 TEST_HEADER include/spdk/lvol.h 00:03:03.831 TEST_HEADER include/spdk/md5.h 00:03:03.831 TEST_HEADER include/spdk/mmio.h 00:03:03.831 TEST_HEADER include/spdk/memory.h 00:03:03.831 TEST_HEADER include/spdk/net.h 00:03:03.831 TEST_HEADER include/spdk/nbd.h 00:03:03.831 TEST_HEADER include/spdk/nvme.h 00:03:03.831 TEST_HEADER include/spdk/notify.h 00:03:03.831 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.831 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.831 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.831 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.831 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.831 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.831 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.831 CC app/iscsi_tgt/iscsi_tgt.o 00:03:03.831 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.831 TEST_HEADER include/spdk/nvmf.h 00:03:03.831 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.831 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.831 TEST_HEADER include/spdk/opal.h 00:03:03.831 TEST_HEADER include/spdk/pipe.h 00:03:03.831 TEST_HEADER include/spdk/opal_spec.h 00:03:03.831 TEST_HEADER include/spdk/pci_ids.h 00:03:03.831 TEST_HEADER include/spdk/reduce.h 00:03:03.831 TEST_HEADER include/spdk/scheduler.h 00:03:03.832 TEST_HEADER include/spdk/rpc.h 00:03:03.832 TEST_HEADER include/spdk/queue.h 00:03:03.832 TEST_HEADER include/spdk/scsi.h 00:03:03.832 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.832 TEST_HEADER include/spdk/sock.h 00:03:03.832 TEST_HEADER include/spdk/stdinc.h 00:03:03.832 TEST_HEADER include/spdk/string.h 00:03:03.832 TEST_HEADER include/spdk/trace.h 00:03:03.832 TEST_HEADER include/spdk/thread.h 00:03:03.832 TEST_HEADER include/spdk/trace_parser.h 00:03:03.832 TEST_HEADER include/spdk/tree.h 00:03:03.832 TEST_HEADER include/spdk/ublk.h 00:03:03.832 TEST_HEADER include/spdk/util.h 00:03:03.832 TEST_HEADER include/spdk/uuid.h 00:03:03.832 TEST_HEADER include/spdk/version.h 00:03:03.832 CC app/nvmf_tgt/nvmf_main.o 00:03:03.832 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.832 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.832 TEST_HEADER include/spdk/vhost.h 00:03:03.832 TEST_HEADER include/spdk/xor.h 00:03:03.832 TEST_HEADER include/spdk/zipf.h 00:03:03.832 CC app/spdk_dd/spdk_dd.o 00:03:03.832 TEST_HEADER include/spdk/vmd.h 00:03:03.832 CXX test/cpp_headers/accel.o 00:03:03.832 CXX test/cpp_headers/accel_module.o 00:03:03.832 CXX test/cpp_headers/assert.o 00:03:03.832 CXX test/cpp_headers/barrier.o 00:03:03.832 CXX test/cpp_headers/base64.o 00:03:03.832 CXX test/cpp_headers/bdev.o 00:03:03.832 CXX test/cpp_headers/bdev_module.o 00:03:03.832 CXX test/cpp_headers/bdev_zone.o 00:03:03.832 CXX test/cpp_headers/bit_array.o 00:03:03.832 CXX test/cpp_headers/blob_bdev.o 00:03:03.832 CXX test/cpp_headers/bit_pool.o 00:03:03.832 CXX test/cpp_headers/blobfs.o 00:03:03.832 CXX test/cpp_headers/blob.o 00:03:03.832 CXX test/cpp_headers/blobfs_bdev.o 
00:03:03.832 CXX test/cpp_headers/conf.o 00:03:03.832 CXX test/cpp_headers/config.o 00:03:03.832 CXX test/cpp_headers/cpuset.o 00:03:03.832 CXX test/cpp_headers/crc16.o 00:03:03.832 CXX test/cpp_headers/crc64.o 00:03:03.832 CXX test/cpp_headers/dma.o 00:03:03.832 CXX test/cpp_headers/endian.o 00:03:03.832 CXX test/cpp_headers/crc32.o 00:03:03.832 CXX test/cpp_headers/dif.o 00:03:03.832 CXX test/cpp_headers/env.o 00:03:03.832 CXX test/cpp_headers/env_dpdk.o 00:03:03.832 CXX test/cpp_headers/fd_group.o 00:03:03.832 CC app/spdk_tgt/spdk_tgt.o 00:03:03.832 CXX test/cpp_headers/event.o 00:03:03.832 CXX test/cpp_headers/file.o 00:03:03.832 CXX test/cpp_headers/fd.o 00:03:03.832 CXX test/cpp_headers/fsdev.o 00:03:03.832 CXX test/cpp_headers/fsdev_module.o 00:03:03.832 CXX test/cpp_headers/gpt_spec.o 00:03:03.832 CXX test/cpp_headers/ftl.o 00:03:03.832 CXX test/cpp_headers/histogram_data.o 00:03:03.832 CXX test/cpp_headers/hexlify.o 00:03:03.832 CXX test/cpp_headers/idxd_spec.o 00:03:03.832 CXX test/cpp_headers/idxd.o 00:03:03.832 CXX test/cpp_headers/init.o 00:03:03.832 CXX test/cpp_headers/ioat.o 00:03:03.832 CXX test/cpp_headers/iscsi_spec.o 00:03:03.832 CXX test/cpp_headers/ioat_spec.o 00:03:03.832 CXX test/cpp_headers/jsonrpc.o 00:03:03.832 CXX test/cpp_headers/json.o 00:03:03.832 CXX test/cpp_headers/keyring.o 00:03:03.832 CXX test/cpp_headers/keyring_module.o 00:03:03.832 CXX test/cpp_headers/likely.o 00:03:03.832 CXX test/cpp_headers/md5.o 00:03:03.832 CXX test/cpp_headers/lvol.o 00:03:03.832 CXX test/cpp_headers/log.o 00:03:03.832 CXX test/cpp_headers/mmio.o 00:03:03.832 CXX test/cpp_headers/net.o 00:03:03.832 CXX test/cpp_headers/memory.o 00:03:03.832 CXX test/cpp_headers/nbd.o 00:03:03.832 CXX test/cpp_headers/notify.o 00:03:03.832 CXX test/cpp_headers/nvme_intel.o 00:03:03.832 CXX test/cpp_headers/nvme.o 00:03:03.832 CXX test/cpp_headers/nvme_ocssd.o 00:03:03.832 CXX test/cpp_headers/nvme_spec.o 00:03:03.832 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:03.832 CXX test/cpp_headers/nvmf_cmd.o 00:03:03.832 CXX test/cpp_headers/nvme_zns.o 00:03:03.832 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:03.832 CXX test/cpp_headers/nvmf.o 00:03:03.832 CXX test/cpp_headers/nvmf_spec.o 00:03:03.832 CXX test/cpp_headers/nvmf_transport.o 00:03:03.832 CXX test/cpp_headers/opal.o 00:03:03.832 CXX test/cpp_headers/opal_spec.o 00:03:03.832 CC examples/ioat/verify/verify.o 00:03:03.832 CC test/app/histogram_perf/histogram_perf.o 00:03:03.832 CC examples/ioat/perf/perf.o 00:03:03.832 CC test/app/stub/stub.o 00:03:03.832 CC app/fio/nvme/fio_plugin.o 00:03:03.832 CC test/env/pci/pci_ut.o 00:03:03.832 CC examples/util/zipf/zipf.o 00:03:03.832 CC test/thread/poller_perf/poller_perf.o 00:03:03.832 CC test/env/vtophys/vtophys.o 00:03:03.832 CXX test/cpp_headers/pci_ids.o 00:03:03.832 CC test/env/memory/memory_ut.o 00:03:03.832 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:03.832 CC test/app/jsoncat/jsoncat.o 00:03:03.832 CC app/fio/bdev/fio_plugin.o 00:03:03.832 CC test/app/bdev_svc/bdev_svc.o 00:03:03.832 CC test/dma/test_dma/test_dma.o 00:03:04.124 LINK spdk_lspci 00:03:04.124 LINK rpc_client_test 00:03:04.124 LINK interrupt_tgt 00:03:04.124 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.425 LINK spdk_nvme_discover 00:03:04.425 LINK histogram_perf 00:03:04.425 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.425 LINK poller_perf 00:03:04.425 CXX test/cpp_headers/pipe.o 00:03:04.425 CXX test/cpp_headers/queue.o 00:03:04.425 LINK stub 00:03:04.425 CXX test/cpp_headers/reduce.o 00:03:04.425 LINK 
nvmf_tgt 00:03:04.425 CXX test/cpp_headers/rpc.o 00:03:04.425 CXX test/cpp_headers/scheduler.o 00:03:04.425 CXX test/cpp_headers/scsi.o 00:03:04.425 CXX test/cpp_headers/scsi_spec.o 00:03:04.425 CXX test/cpp_headers/sock.o 00:03:04.425 CXX test/cpp_headers/stdinc.o 00:03:04.425 CXX test/cpp_headers/string.o 00:03:04.425 CXX test/cpp_headers/thread.o 00:03:04.425 CXX test/cpp_headers/trace.o 00:03:04.425 CXX test/cpp_headers/trace_parser.o 00:03:04.425 CXX test/cpp_headers/tree.o 00:03:04.425 CXX test/cpp_headers/ublk.o 00:03:04.425 LINK vtophys 00:03:04.425 CXX test/cpp_headers/util.o 00:03:04.425 CXX test/cpp_headers/uuid.o 00:03:04.425 LINK jsoncat 00:03:04.425 CXX test/cpp_headers/version.o 00:03:04.425 CXX test/cpp_headers/vfio_user_pci.o 00:03:04.425 CXX test/cpp_headers/vfio_user_spec.o 00:03:04.425 CXX test/cpp_headers/vhost.o 00:03:04.425 CXX test/cpp_headers/vmd.o 00:03:04.425 CXX test/cpp_headers/xor.o 00:03:04.425 LINK iscsi_tgt 00:03:04.425 LINK zipf 00:03:04.425 CXX test/cpp_headers/zipf.o 00:03:04.425 LINK spdk_trace_record 00:03:04.425 LINK spdk_tgt 00:03:04.425 LINK env_dpdk_post_init 00:03:04.425 LINK ioat_perf 00:03:04.425 LINK verify 00:03:04.425 LINK bdev_svc 00:03:04.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.425 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.741 LINK spdk_dd 00:03:04.741 LINK spdk_trace 00:03:04.741 LINK pci_ut 00:03:04.741 LINK nvme_fuzz 00:03:04.741 LINK spdk_nvme 00:03:04.999 CC test/event/event_perf/event_perf.o 00:03:04.999 CC test/event/reactor_perf/reactor_perf.o 00:03:04.999 CC test/event/app_repeat/app_repeat.o 00:03:04.999 CC test/event/reactor/reactor.o 00:03:04.999 CC test/event/scheduler/scheduler.o 00:03:04.999 CC examples/vmd/led/led.o 00:03:04.999 LINK test_dma 00:03:04.999 CC examples/idxd/perf/perf.o 00:03:04.999 CC examples/sock/hello_world/hello_sock.o 00:03:04.999 CC examples/vmd/lsvmd/lsvmd.o 00:03:04.999 CC examples/thread/thread/thread_ex.o 00:03:04.999 LINK spdk_bdev 00:03:04.999 LINK mem_callbacks 00:03:04.999 LINK reactor_perf 00:03:04.999 LINK event_perf 00:03:04.999 LINK reactor 00:03:04.999 LINK app_repeat 00:03:04.999 LINK vhost_fuzz 00:03:04.999 LINK led 00:03:04.999 CC app/vhost/vhost.o 00:03:04.999 LINK lsvmd 00:03:05.257 LINK spdk_top 00:03:05.257 LINK scheduler 00:03:05.257 LINK spdk_nvme_perf 00:03:05.257 LINK spdk_nvme_identify 00:03:05.257 LINK hello_sock 00:03:05.257 LINK thread 00:03:05.257 LINK idxd_perf 00:03:05.257 LINK vhost 00:03:05.516 CC test/nvme/compliance/nvme_compliance.o 00:03:05.516 CC test/nvme/sgl/sgl.o 00:03:05.516 CC test/nvme/startup/startup.o 00:03:05.516 CC test/nvme/err_injection/err_injection.o 00:03:05.516 CC test/nvme/aer/aer.o 00:03:05.516 CC test/nvme/e2edp/nvme_dp.o 00:03:05.516 CC test/nvme/cuse/cuse.o 00:03:05.516 CC test/nvme/boot_partition/boot_partition.o 00:03:05.516 CC test/nvme/connect_stress/connect_stress.o 00:03:05.516 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:05.516 CC test/nvme/simple_copy/simple_copy.o 00:03:05.516 CC test/nvme/reset/reset.o 00:03:05.516 CC test/nvme/overhead/overhead.o 00:03:05.516 CC test/nvme/reserve/reserve.o 00:03:05.516 CC test/blobfs/mkfs/mkfs.o 00:03:05.516 CC test/nvme/fused_ordering/fused_ordering.o 00:03:05.516 CC test/nvme/fdp/fdp.o 00:03:05.516 CC test/accel/dif/dif.o 00:03:05.516 LINK memory_ut 00:03:05.516 CC test/lvol/esnap/esnap.o 00:03:05.516 LINK boot_partition 00:03:05.516 LINK startup 00:03:05.516 LINK err_injection 00:03:05.773 LINK connect_stress 
00:03:05.773 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:05.773 CC examples/nvme/arbitration/arbitration.o 00:03:05.773 CC examples/nvme/hello_world/hello_world.o 00:03:05.773 LINK doorbell_aers 00:03:05.773 CC examples/nvme/hotplug/hotplug.o 00:03:05.773 CC examples/nvme/abort/abort.o 00:03:05.773 CC examples/nvme/reconnect/reconnect.o 00:03:05.773 LINK fused_ordering 00:03:05.773 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:05.773 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:05.773 LINK mkfs 00:03:05.773 LINK reserve 00:03:05.773 LINK simple_copy 00:03:05.773 LINK sgl 00:03:05.773 LINK reset 00:03:05.773 LINK aer 00:03:05.773 LINK nvme_dp 00:03:05.773 CC examples/accel/perf/accel_perf.o 00:03:05.773 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:05.773 CC examples/blob/hello_world/hello_blob.o 00:03:05.773 CC examples/blob/cli/blobcli.o 00:03:05.773 LINK overhead 00:03:05.773 LINK nvme_compliance 00:03:05.773 LINK fdp 00:03:05.773 LINK pmr_persistence 00:03:05.773 LINK cmb_copy 00:03:05.773 LINK hello_world 00:03:06.032 LINK hotplug 00:03:06.032 LINK arbitration 00:03:06.032 LINK reconnect 00:03:06.032 LINK hello_blob 00:03:06.032 LINK hello_fsdev 00:03:06.032 LINK abort 00:03:06.290 LINK nvme_manage 00:03:06.290 LINK dif 00:03:06.290 LINK blobcli 00:03:06.290 LINK accel_perf 00:03:06.290 LINK iscsi_fuzz 00:03:06.548 LINK cuse 00:03:06.806 CC test/bdev/bdevio/bdevio.o 00:03:06.806 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.806 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.064 LINK hello_bdev 00:03:07.065 LINK bdevio 00:03:07.628 LINK bdevperf 00:03:07.886 CC examples/nvmf/nvmf/nvmf.o 00:03:08.144 LINK nvmf 00:03:10.690 LINK esnap 00:03:10.690 00:03:10.690 real 0m59.049s 00:03:10.690 user 8m50.609s 00:03:10.690 sys 3m34.683s 00:03:10.690 12:06:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:10.690 12:06:17 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.690 ************************************ 00:03:10.690 END TEST make 00:03:10.690 ************************************ 00:03:10.690 12:06:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.690 12:06:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.690 12:06:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.690 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.690 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.690 12:06:17 -- pm/common@44 -- $ pid=3361252 00:03:10.690 12:06:17 -- pm/common@50 -- $ kill -TERM 3361252 00:03:10.690 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.690 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.690 12:06:17 -- pm/common@44 -- $ pid=3361253 00:03:10.690 12:06:17 -- pm/common@50 -- $ kill -TERM 3361253 00:03:10.690 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.690 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:10.690 12:06:17 -- pm/common@44 -- $ pid=3361255 00:03:10.690 12:06:17 -- pm/common@50 -- $ kill -TERM 3361255 00:03:10.690 12:06:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.690 12:06:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:10.690 12:06:17 -- 
pm/common@44 -- $ pid=3361278 00:03:10.690 12:06:17 -- pm/common@50 -- $ sudo -E kill -TERM 3361278 00:03:10.690 12:06:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:10.690 12:06:17 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:10.690 12:06:17 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:10.690 12:06:17 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:10.690 12:06:17 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:10.949 12:06:17 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:10.949 12:06:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.949 12:06:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.949 12:06:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.949 12:06:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.949 12:06:17 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.949 12:06:17 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.949 12:06:17 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.949 12:06:17 -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.949 12:06:17 -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.949 12:06:17 -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.949 12:06:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.949 12:06:17 -- scripts/common.sh@344 -- # case "$op" in 00:03:10.949 12:06:17 -- scripts/common.sh@345 -- # : 1 00:03:10.949 12:06:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.949 12:06:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:10.949 12:06:17 -- scripts/common.sh@365 -- # decimal 1 00:03:10.949 12:06:17 -- scripts/common.sh@353 -- # local d=1 00:03:10.949 12:06:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.949 12:06:17 -- scripts/common.sh@355 -- # echo 1 00:03:10.949 12:06:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.949 12:06:17 -- scripts/common.sh@366 -- # decimal 2 00:03:10.949 12:06:17 -- scripts/common.sh@353 -- # local d=2 00:03:10.949 12:06:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.949 12:06:17 -- scripts/common.sh@355 -- # echo 2 00:03:10.949 12:06:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.949 12:06:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.949 12:06:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.949 12:06:17 -- scripts/common.sh@368 -- # return 0 00:03:10.949 12:06:17 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.949 12:06:17 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.949 --rc genhtml_branch_coverage=1 00:03:10.949 --rc genhtml_function_coverage=1 00:03:10.949 --rc genhtml_legend=1 00:03:10.949 --rc geninfo_all_blocks=1 00:03:10.949 --rc geninfo_unexecuted_blocks=1 00:03:10.949 00:03:10.949 ' 00:03:10.949 12:06:17 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.949 --rc genhtml_branch_coverage=1 00:03:10.949 --rc genhtml_function_coverage=1 00:03:10.949 --rc genhtml_legend=1 00:03:10.949 --rc geninfo_all_blocks=1 00:03:10.949 --rc geninfo_unexecuted_blocks=1 00:03:10.949 00:03:10.949 ' 00:03:10.949 12:06:17 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:10.949 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.949 --rc genhtml_branch_coverage=1 00:03:10.949 --rc genhtml_function_coverage=1 00:03:10.949 --rc genhtml_legend=1 00:03:10.949 --rc geninfo_all_blocks=1 00:03:10.949 --rc geninfo_unexecuted_blocks=1 00:03:10.949 00:03:10.949 ' 00:03:10.949 12:06:17 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.949 --rc genhtml_branch_coverage=1 00:03:10.949 --rc genhtml_function_coverage=1 00:03:10.949 --rc genhtml_legend=1 00:03:10.949 --rc geninfo_all_blocks=1 00:03:10.949 --rc geninfo_unexecuted_blocks=1 00:03:10.949 00:03:10.949 ' 00:03:10.949 12:06:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:10.949 12:06:17 -- nvmf/common.sh@7 -- # uname -s 00:03:10.949 12:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.949 12:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.949 12:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.949 12:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.949 12:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.949 12:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.949 12:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.949 12:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.949 12:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.949 12:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.949 12:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:10.949 12:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:10.949 12:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.949 12:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.949 12:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:10.949 12:06:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.949 12:06:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:10.949 12:06:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:10.949 12:06:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.949 12:06:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.949 12:06:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.949 12:06:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.949 12:06:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.949 12:06:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.950 12:06:17 -- 
paths/export.sh@5 -- # export PATH 00:03:10.950 12:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.950 12:06:17 -- nvmf/common.sh@51 -- # : 0 00:03:10.950 12:06:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:10.950 12:06:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:10.950 12:06:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:10.950 12:06:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.950 12:06:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.950 12:06:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:10.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:10.950 12:06:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:10.950 12:06:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:10.950 12:06:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:10.950 12:06:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.950 12:06:17 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.950 12:06:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.950 12:06:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.950 12:06:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.950 12:06:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.950 12:06:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.950 12:06:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.950 12:06:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.950 12:06:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.950 12:06:17 -- spdk/autotest.sh@48 -- # udevadm_pid=3423835 00:03:10.950 12:06:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.950 12:06:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.950 12:06:17 -- pm/common@17 -- # local monitor 00:03:10.950 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.950 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.950 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.950 12:06:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.950 12:06:17 -- pm/common@25 -- # sleep 1 00:03:10.950 12:06:17 -- pm/common@21 -- # date +%s 00:03:10.950 12:06:17 -- pm/common@21 -- # date +%s 00:03:10.950 12:06:17 -- pm/common@21 -- # date +%s 00:03:10.950 12:06:17 -- pm/common@21 -- # date +%s 00:03:10.950 12:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733828777 00:03:10.950 12:06:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733828777 00:03:10.950 12:06:17 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733828777 00:03:10.950 12:06:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733828777 00:03:10.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733828777_collect-cpu-load.pm.log 00:03:10.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733828777_collect-vmstat.pm.log 00:03:10.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733828777_collect-cpu-temp.pm.log 00:03:10.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733828777_collect-bmc-pm.bmc.pm.log 00:03:11.885 12:06:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.885 12:06:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.885 12:06:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.885 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:03:11.885 12:06:18 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.885 12:06:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:11.885 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:03:11.885 12:06:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:11.885 12:06:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.885 12:06:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.885 12:06:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:11.885 12:06:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.885 12:06:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.885 12:06:18 -- common/autotest_common.sh@1457 -- # uname 00:03:11.885 12:06:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:11.885 12:06:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.885 12:06:18 -- common/autotest_common.sh@1477 -- # uname 00:03:11.885 12:06:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:11.885 12:06:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:11.885 12:06:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.143 lcov: LCOV version 1.15 00:03:12.143 12:06:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:22.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:22.113 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:34.312 12:06:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:34.312 12:06:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.312 12:06:40 -- common/autotest_common.sh@10 -- # set +x 00:03:34.312 12:06:40 -- spdk/autotest.sh@78 -- # rm -f 00:03:34.312 12:06:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.211 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:36.211 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:36.211 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:36.211 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:36.211 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:36.211 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:36.211 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:36.470 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:36.470 12:06:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:36.470 12:06:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:36.470 12:06:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:36.470 12:06:43 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:36.470 12:06:43 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:36.470 12:06:43 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:36.470 12:06:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:36.470 12:06:43 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:36.470 12:06:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:36.470 12:06:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:36.470 12:06:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:36.470 12:06:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.470 12:06:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:36.470 12:06:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:36.470 12:06:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.470 12:06:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:36.470 12:06:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:36.470 12:06:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:36.470 12:06:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.728 No valid GPT data, bailing 00:03:36.728 12:06:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.728 12:06:43 -- scripts/common.sh@394 -- # pt= 00:03:36.728 12:06:43 -- scripts/common.sh@395 -- # return 1 00:03:36.728 12:06:43 -- spdk/autotest.sh@101 -- # dd 
if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.728 1+0 records in 00:03:36.728 1+0 records out 00:03:36.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057158 s, 183 MB/s 00:03:36.728 12:06:43 -- spdk/autotest.sh@105 -- # sync 00:03:36.728 12:06:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.728 12:06:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.728 12:06:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:40.917 12:06:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:40.917 12:06:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:40.917 12:06:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:40.917 12:06:47 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:43.447 Hugepages 00:03:43.447 node hugesize free / total 00:03:43.447 node0 1048576kB 0 / 0 00:03:43.447 node0 2048kB 0 / 0 00:03:43.447 node1 1048576kB 0 / 0 00:03:43.447 node1 2048kB 0 / 0 00:03:43.447 00:03:43.447 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.447 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:43.447 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:43.447 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:43.447 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:43.447 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:43.447 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:43.447 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:43.447 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:43.447 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:43.447 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:43.447 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:43.447 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:43.447 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:43.447 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:43.447 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:43.447 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:43.447 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:43.447 12:06:50 -- spdk/autotest.sh@117 -- # uname -s 00:03:43.447 12:06:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:43.447 12:06:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:43.447 12:06:50 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.981 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.981 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.917 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:46.917 12:06:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:47.854 12:06:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:47.855 
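Earlier in this stretch the harness zeroed the first MiB of /dev/nvme0n1, but only after confirming the namespace was not in use: spdk-gpt.py found no GPT ("No valid GPT data, bailing") and blkid reported no partition-table type, so block_in_use returned 1. A condensed sketch of that guard, assuming /dev/nvme0n1 and stock blkid/dd:

  pt=$(blkid -s PTTYPE -o value /dev/nvme0n1)   # empty when no partition table exists
  if [[ -z $pt ]]; then
      # namespace looks unused; clobber any stale metadata in the first MiB
      dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
  fi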
12:06:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:47.855 12:06:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:47.855 12:06:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:47.855 12:06:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:47.855 12:06:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:47.855 12:06:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.113 12:06:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:48.113 12:06:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:48.113 12:06:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:48.113 12:06:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:48.113 12:06:54 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.647 Waiting for block devices as requested 00:03:50.647 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:50.647 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:50.647 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:50.906 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:50.906 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:50.906 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:51.166 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:51.166 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:51.166 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:51.166 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:51.424 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:51.424 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:51.424 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:51.683 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:51.683 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:51.683 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:51.683 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:51.942 12:06:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.942 12:06:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:51.942 12:06:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:51.942 12:06:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:51.942 12:06:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.942 12:06:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.942 12:06:58 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:51.942 12:06:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.942 12:06:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.942 
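Two helpers carry this pass: get_nvme_bdfs turns gen_nvme.sh JSON into PCI addresses, and the OACS probe decides whether the controller supports namespace management (bit 3, mask 0x8, of the Optional Admin Command Support word). A minimal sketch of both, assuming nvme-cli and jq are installed and $rootdir points at the spdk checkout:

  # PCI addresses of all NVMe controllers, pulled from the generated config
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

  # 'nvme id-ctrl' prints oacs as a hex word; 0xf & 0x8 == 8 above, so
  # namespace management is supported on this drive
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  oacs_ns_manage=$(( oacs & 0x8 ))
  (( oacs_ns_manage != 0 )) && echo 'namespace management supported'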
12:06:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:51.942 12:06:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.942 12:06:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.942 12:06:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.942 12:06:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.942 12:06:58 -- common/autotest_common.sh@1543 -- # continue 00:03:51.942 12:06:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:51.942 12:06:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.942 12:06:58 -- common/autotest_common.sh@10 -- # set +x 00:03:51.942 12:06:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:51.942 12:06:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.942 12:06:58 -- common/autotest_common.sh@10 -- # set +x 00:03:51.942 12:06:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.474 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.474 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.475 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.475 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.475 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.475 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.475 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.475 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.411 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.411 12:07:02 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:55.411 12:07:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.411 12:07:02 -- common/autotest_common.sh@10 -- # set +x 00:03:55.411 12:07:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:55.411 12:07:02 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:55.411 12:07:02 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.411 12:07:02 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:55.411 12:07:02 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:55.411 12:07:02 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:55.411 12:07:02 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:55.411 12:07:02 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:55.411 12:07:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:55.411 12:07:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:55.411 12:07:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.411 12:07:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.411 12:07:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:55.670 12:07:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:55.670 12:07:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:55.670 12:07:02 -- common/autotest_common.sh@1565 -- # for bdf in 
"${_bdfs[@]}" 00:03:55.670 12:07:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:55.670 12:07:02 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:55.670 12:07:02 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:55.670 12:07:02 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:55.670 12:07:02 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:55.670 12:07:02 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:55.670 12:07:02 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:55.670 12:07:02 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3437058 00:03:55.670 12:07:02 -- common/autotest_common.sh@1585 -- # waitforlisten 3437058 00:03:55.670 12:07:02 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.670 12:07:02 -- common/autotest_common.sh@835 -- # '[' -z 3437058 ']' 00:03:55.670 12:07:02 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.670 12:07:02 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.670 12:07:02 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.670 12:07:02 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.670 12:07:02 -- common/autotest_common.sh@10 -- # set +x 00:03:55.670 [2024-12-10 12:07:02.370779] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:03:55.670 [2024-12-10 12:07:02.370870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437058 ] 00:03:55.670 [2024-12-10 12:07:02.481805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.930 [2024-12-10 12:07:02.581725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.865 12:07:03 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:56.865 12:07:03 -- common/autotest_common.sh@868 -- # return 0 00:03:56.865 12:07:03 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:56.865 12:07:03 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:56.865 12:07:03 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:00.153 nvme0n1 00:04:00.153 12:07:06 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:00.153 [2024-12-10 12:07:06.598659] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:00.153 [2024-12-10 12:07:06.598707] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:00.153 request: 00:04:00.153 { 00:04:00.153 "nvme_ctrlr_name": "nvme0", 00:04:00.153 "password": "test", 00:04:00.153 "method": "bdev_nvme_opal_revert", 00:04:00.153 "req_id": 1 00:04:00.153 } 00:04:00.153 Got JSON-RPC error response 00:04:00.153 response: 00:04:00.153 { 00:04:00.153 "code": -32603, 00:04:00.153 "message": "Internal error" 00:04:00.153 } 00:04:00.153 12:07:06 -- common/autotest_common.sh@1591 -- # true 
00:04:00.153 12:07:06 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:00.153 12:07:06 -- common/autotest_common.sh@1595 -- # killprocess 3437058 00:04:00.153 12:07:06 -- common/autotest_common.sh@954 -- # '[' -z 3437058 ']' 00:04:00.153 12:07:06 -- common/autotest_common.sh@958 -- # kill -0 3437058 00:04:00.153 12:07:06 -- common/autotest_common.sh@959 -- # uname 00:04:00.153 12:07:06 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.153 12:07:06 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3437058 00:04:00.153 12:07:06 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.153 12:07:06 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.153 12:07:06 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3437058' 00:04:00.153 killing process with pid 3437058 00:04:00.153 12:07:06 -- common/autotest_common.sh@973 -- # kill 3437058 00:04:00.153 12:07:06 -- common/autotest_common.sh@978 -- # wait 3437058 00:04:03.437 12:07:10 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.437 12:07:10 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.437 12:07:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.437 12:07:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.437 12:07:10 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.437 12:07:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.437 12:07:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.437 12:07:10 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:03.437 12:07:10 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.437 12:07:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.437 12:07:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.437 12:07:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.437 ************************************ 00:04:03.437 START TEST env 00:04:03.437 ************************************ 00:04:03.437 12:07:10 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.695 * Looking for test storage... 
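killprocess, traced at the top of this block, is deliberately more careful than a bare kill: it verifies the pid still maps to the expected command (reactor_0 here), refuses to signal a sudo wrapper directly, and waits on the pid so the exit status is reaped. A simplified sketch of the helper in common/autotest_common.sh (assumption: details trimmed):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      local name
      name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
      [[ $name == sudo ]] && return 1   # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid" || true
  }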
00:04:03.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:03.695 12:07:10 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:03.696 12:07:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.696 12:07:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.696 12:07:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.696 12:07:10 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.696 12:07:10 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.696 12:07:10 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.696 12:07:10 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.696 12:07:10 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.696 12:07:10 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.696 12:07:10 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.696 12:07:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.696 12:07:10 env -- scripts/common.sh@344 -- # case "$op" in 00:04:03.696 12:07:10 env -- scripts/common.sh@345 -- # : 1 00:04:03.696 12:07:10 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.696 12:07:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.696 12:07:10 env -- scripts/common.sh@365 -- # decimal 1 00:04:03.696 12:07:10 env -- scripts/common.sh@353 -- # local d=1 00:04:03.696 12:07:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.696 12:07:10 env -- scripts/common.sh@355 -- # echo 1 00:04:03.696 12:07:10 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.696 12:07:10 env -- scripts/common.sh@366 -- # decimal 2 00:04:03.696 12:07:10 env -- scripts/common.sh@353 -- # local d=2 00:04:03.696 12:07:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.696 12:07:10 env -- scripts/common.sh@355 -- # echo 2 00:04:03.696 12:07:10 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.696 12:07:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.696 12:07:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.696 12:07:10 env -- scripts/common.sh@368 -- # return 0 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:03.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.696 --rc genhtml_branch_coverage=1 00:04:03.696 --rc genhtml_function_coverage=1 00:04:03.696 --rc genhtml_legend=1 00:04:03.696 --rc geninfo_all_blocks=1 00:04:03.696 --rc geninfo_unexecuted_blocks=1 00:04:03.696 00:04:03.696 ' 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:03.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.696 --rc genhtml_branch_coverage=1 00:04:03.696 --rc genhtml_function_coverage=1 00:04:03.696 --rc genhtml_legend=1 00:04:03.696 --rc geninfo_all_blocks=1 00:04:03.696 --rc geninfo_unexecuted_blocks=1 00:04:03.696 00:04:03.696 ' 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:03.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.696 --rc genhtml_branch_coverage=1 00:04:03.696 --rc genhtml_function_coverage=1 
00:04:03.696 --rc genhtml_legend=1 00:04:03.696 --rc geninfo_all_blocks=1 00:04:03.696 --rc geninfo_unexecuted_blocks=1 00:04:03.696 00:04:03.696 ' 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:03.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.696 --rc genhtml_branch_coverage=1 00:04:03.696 --rc genhtml_function_coverage=1 00:04:03.696 --rc genhtml_legend=1 00:04:03.696 --rc geninfo_all_blocks=1 00:04:03.696 --rc geninfo_unexecuted_blocks=1 00:04:03.696 00:04:03.696 ' 00:04:03.696 12:07:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.696 12:07:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.696 12:07:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.696 ************************************ 00:04:03.696 START TEST env_memory 00:04:03.696 ************************************ 00:04:03.696 12:07:10 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.696 00:04:03.696 00:04:03.696 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.696 http://cunit.sourceforge.net/ 00:04:03.696 00:04:03.696 00:04:03.696 Suite: memory 00:04:03.696 Test: alloc and free memory map ...[2024-12-10 12:07:10.457914] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.696 passed 00:04:03.696 Test: mem map translation ...[2024-12-10 12:07:10.503451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.696 [2024-12-10 12:07:10.503476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.696 [2024-12-10 12:07:10.503547] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.696 [2024-12-10 12:07:10.503564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.955 passed 00:04:03.955 Test: mem map registration ...[2024-12-10 12:07:10.567955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:03.955 [2024-12-10 12:07:10.567976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:03.955 passed 00:04:03.955 Test: mem map adjacent registrations ...passed 00:04:03.955 00:04:03.955 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.955 suites 1 1 n/a 0 0 00:04:03.955 tests 4 4 4 0 0 00:04:03.955 asserts 152 152 152 0 n/a 00:04:03.955 00:04:03.955 Elapsed time = 0.234 seconds 00:04:03.955 00:04:03.955 real 0m0.269s 00:04:03.955 user 0m0.245s 00:04:03.955 sys 0m0.023s 00:04:03.955 12:07:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.955 12:07:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:03.955 ************************************ 00:04:03.955 END TEST env_memory 00:04:03.955 ************************************ 00:04:03.955 12:07:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.955 12:07:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.955 12:07:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.955 12:07:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.955 ************************************ 00:04:03.955 START TEST env_vtophys 00:04:03.955 ************************************ 00:04:03.955 12:07:10 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.955 EAL: lib.eal log level changed from notice to debug 00:04:03.955 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.955 EAL: Detected lcore 1 as core 1 on socket 0 00:04:03.955 EAL: Detected lcore 2 as core 2 on socket 0 00:04:03.955 EAL: Detected lcore 3 as core 3 on socket 0 00:04:03.955 EAL: Detected lcore 4 as core 4 on socket 0 00:04:03.955 EAL: Detected lcore 5 as core 5 on socket 0 00:04:03.955 EAL: Detected lcore 6 as core 6 on socket 0 00:04:03.955 EAL: Detected lcore 7 as core 8 on socket 0 00:04:03.955 EAL: Detected lcore 8 as core 9 on socket 0 00:04:03.955 EAL: Detected lcore 9 as core 10 on socket 0 00:04:03.955 EAL: Detected lcore 10 as core 11 on socket 0 00:04:03.955 EAL: Detected lcore 11 as core 12 on socket 0 00:04:03.955 EAL: Detected lcore 12 as core 13 on socket 0 00:04:03.955 EAL: Detected lcore 13 as core 16 on socket 0 00:04:03.955 EAL: Detected lcore 14 as core 17 on socket 0 00:04:03.955 EAL: Detected lcore 15 as core 18 on socket 0 00:04:03.955 EAL: Detected lcore 16 as core 19 on socket 0 00:04:03.955 EAL: Detected lcore 17 as core 20 on socket 0 00:04:03.955 EAL: Detected lcore 18 as core 21 on socket 0 00:04:03.955 EAL: Detected lcore 19 as core 25 on socket 0 00:04:03.955 EAL: Detected lcore 20 as core 26 on socket 0 00:04:03.955 EAL: Detected lcore 21 as core 27 on socket 0 00:04:03.955 EAL: Detected lcore 22 as core 28 on socket 0 00:04:03.955 EAL: Detected lcore 23 as core 29 on socket 0 00:04:03.955 EAL: Detected lcore 24 as core 0 on socket 1 00:04:03.955 EAL: Detected lcore 25 as core 1 on socket 1 00:04:03.955 EAL: Detected lcore 26 as core 2 on socket 1 00:04:03.955 EAL: Detected lcore 27 as core 3 on socket 1 00:04:03.955 EAL: Detected lcore 28 as core 4 on socket 1 00:04:03.955 EAL: Detected lcore 29 as core 5 on socket 1 00:04:03.955 EAL: Detected lcore 30 as core 6 on socket 1 00:04:03.955 EAL: Detected lcore 31 as core 8 on socket 1 00:04:03.955 EAL: Detected lcore 32 as core 9 on socket 1 00:04:03.955 EAL: Detected lcore 33 as core 10 on socket 1 00:04:03.955 EAL: Detected lcore 34 as core 11 on socket 1 00:04:03.955 EAL: Detected lcore 35 as core 12 on socket 1 00:04:03.955 EAL: Detected lcore 36 as core 13 on socket 1 00:04:03.955 EAL: Detected lcore 37 as core 16 on socket 1 00:04:03.955 EAL: Detected lcore 38 as core 17 on socket 1 00:04:03.955 EAL: Detected lcore 39 as core 18 on socket 1 00:04:03.955 EAL: Detected lcore 40 as core 19 on socket 1 00:04:03.955 EAL: Detected lcore 41 as core 20 on socket 1 00:04:03.955 EAL: Detected lcore 42 as core 21 on socket 1 00:04:03.955 EAL: Detected lcore 43 as core 25 on socket 1 00:04:03.955 EAL: Detected lcore 44 as core 26 on socket 1 00:04:03.955 EAL: Detected lcore 45 as core 27 on socket 1 
00:04:03.955 EAL: Detected lcore 46 as core 28 on socket 1 00:04:03.955 EAL: Detected lcore 47 as core 29 on socket 1 00:04:03.955 EAL: Detected lcore 48 as core 0 on socket 0 00:04:03.955 EAL: Detected lcore 49 as core 1 on socket 0 00:04:03.955 EAL: Detected lcore 50 as core 2 on socket 0 00:04:03.955 EAL: Detected lcore 51 as core 3 on socket 0 00:04:03.955 EAL: Detected lcore 52 as core 4 on socket 0 00:04:03.955 EAL: Detected lcore 53 as core 5 on socket 0 00:04:03.955 EAL: Detected lcore 54 as core 6 on socket 0 00:04:03.955 EAL: Detected lcore 55 as core 8 on socket 0 00:04:03.955 EAL: Detected lcore 56 as core 9 on socket 0 00:04:03.955 EAL: Detected lcore 57 as core 10 on socket 0 00:04:03.955 EAL: Detected lcore 58 as core 11 on socket 0 00:04:03.955 EAL: Detected lcore 59 as core 12 on socket 0 00:04:03.955 EAL: Detected lcore 60 as core 13 on socket 0 00:04:03.955 EAL: Detected lcore 61 as core 16 on socket 0 00:04:03.955 EAL: Detected lcore 62 as core 17 on socket 0 00:04:03.955 EAL: Detected lcore 63 as core 18 on socket 0 00:04:03.955 EAL: Detected lcore 64 as core 19 on socket 0 00:04:03.955 EAL: Detected lcore 65 as core 20 on socket 0 00:04:03.955 EAL: Detected lcore 66 as core 21 on socket 0 00:04:03.955 EAL: Detected lcore 67 as core 25 on socket 0 00:04:03.955 EAL: Detected lcore 68 as core 26 on socket 0 00:04:03.955 EAL: Detected lcore 69 as core 27 on socket 0 00:04:03.955 EAL: Detected lcore 70 as core 28 on socket 0 00:04:03.955 EAL: Detected lcore 71 as core 29 on socket 0 00:04:03.955 EAL: Detected lcore 72 as core 0 on socket 1 00:04:03.955 EAL: Detected lcore 73 as core 1 on socket 1 00:04:03.955 EAL: Detected lcore 74 as core 2 on socket 1 00:04:03.955 EAL: Detected lcore 75 as core 3 on socket 1 00:04:03.956 EAL: Detected lcore 76 as core 4 on socket 1 00:04:03.956 EAL: Detected lcore 77 as core 5 on socket 1 00:04:03.956 EAL: Detected lcore 78 as core 6 on socket 1 00:04:03.956 EAL: Detected lcore 79 as core 8 on socket 1 00:04:03.956 EAL: Detected lcore 80 as core 9 on socket 1 00:04:03.956 EAL: Detected lcore 81 as core 10 on socket 1 00:04:03.956 EAL: Detected lcore 82 as core 11 on socket 1 00:04:03.956 EAL: Detected lcore 83 as core 12 on socket 1 00:04:03.956 EAL: Detected lcore 84 as core 13 on socket 1 00:04:03.956 EAL: Detected lcore 85 as core 16 on socket 1 00:04:03.956 EAL: Detected lcore 86 as core 17 on socket 1 00:04:03.956 EAL: Detected lcore 87 as core 18 on socket 1 00:04:03.956 EAL: Detected lcore 88 as core 19 on socket 1 00:04:03.956 EAL: Detected lcore 89 as core 20 on socket 1 00:04:03.956 EAL: Detected lcore 90 as core 21 on socket 1 00:04:03.956 EAL: Detected lcore 91 as core 25 on socket 1 00:04:03.956 EAL: Detected lcore 92 as core 26 on socket 1 00:04:03.956 EAL: Detected lcore 93 as core 27 on socket 1 00:04:03.956 EAL: Detected lcore 94 as core 28 on socket 1 00:04:03.956 EAL: Detected lcore 95 as core 29 on socket 1 00:04:03.956 EAL: Maximum logical cores by configuration: 128 00:04:03.956 EAL: Detected CPU lcores: 96 00:04:03.956 EAL: Detected NUMA nodes: 2 00:04:03.956 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:03.956 EAL: Detected shared linkage of DPDK 00:04:03.956 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.226 EAL: Bus pci wants IOVA as 'DC' 00:04:04.226 EAL: Buses did not request a specific IOVA mode. 00:04:04.226 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:04.226 EAL: Selected IOVA mode 'VA' 00:04:04.226 EAL: Probing VFIO support... 
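The line "IOMMU is available, selecting IOVA as VA" is the hinge of this whole initialization: with vfio and a working IOMMU, EAL can use process virtual addresses as IO addresses, which is what the memseg-list reservations that follow rely on. A quick host-side check for the same preconditions (sketch, assuming sysfs and /proc are mounted as usual):

  # a populated iommu_groups directory means the kernel IOMMU is active,
  # which is what lets EAL choose IOVA=VA through vfio
  compgen -G '/sys/kernel/iommu_groups/*' > /dev/null && echo 'IOMMU active'
  grep -E 'HugePages_(Total|Free)' /proc/meminfo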
00:04:04.226 EAL: IOMMU type 1 (Type 1) is supported 00:04:04.226 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:04.226 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:04.226 EAL: VFIO support initialized 00:04:04.226 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.226 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.226 EAL: Setting up physically contiguous memory... 00:04:04.226 EAL: Setting maximum number of open files to 524288 00:04:04.226 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.226 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:04.226 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.226 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:04.226 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.226 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:04.226 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.226 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.226 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:04.226 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:04.226 EAL: Hugepages will be freed exactly as allocated. 00:04:04.226 EAL: No shared files mode enabled, IPC is disabled 00:04:04.226 EAL: No shared files mode enabled, IPC is disabled 00:04:04.226 EAL: TSC frequency is ~2100000 KHz 00:04:04.226 EAL: Main lcore 0 is ready (tid=7f8e80708a40;cpuset=[0]) 00:04:04.226 EAL: Trying to obtain current memory policy. 00:04:04.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.226 EAL: Restoring previous memory policy: 0 00:04:04.226 EAL: request: mp_malloc_sync 00:04:04.226 EAL: No shared files mode enabled, IPC is disabled 00:04:04.226 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.226 EAL: No shared files mode enabled, IPC is disabled 00:04:04.226 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.226 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.226 00:04:04.226 00:04:04.226 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.226 http://cunit.sourceforge.net/ 00:04:04.226 00:04:04.226 00:04:04.226 Suite: components_suite 00:04:04.545 Test: vtophys_malloc_test ...passed 00:04:04.545 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.545 EAL: Restoring previous memory policy: 4 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.545 EAL: Trying to obtain current memory policy. 00:04:04.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.545 EAL: Restoring previous memory policy: 4 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.545 EAL: Trying to obtain current memory policy. 00:04:04.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.545 EAL: Restoring previous memory policy: 4 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.545 EAL: Trying to obtain current memory policy. 
00:04:04.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.545 EAL: Restoring previous memory policy: 4 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.545 EAL: Trying to obtain current memory policy. 00:04:04.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.545 EAL: Restoring previous memory policy: 4 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.545 EAL: request: mp_malloc_sync 00:04:04.545 EAL: No shared files mode enabled, IPC is disabled 00:04:04.545 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.841 EAL: request: mp_malloc_sync 00:04:04.841 EAL: No shared files mode enabled, IPC is disabled 00:04:04.841 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.841 EAL: Trying to obtain current memory policy. 00:04:04.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.841 EAL: Restoring previous memory policy: 4 00:04:04.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.841 EAL: request: mp_malloc_sync 00:04:04.841 EAL: No shared files mode enabled, IPC is disabled 00:04:04.841 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.841 EAL: request: mp_malloc_sync 00:04:04.841 EAL: No shared files mode enabled, IPC is disabled 00:04:04.841 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.841 EAL: Trying to obtain current memory policy. 00:04:04.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.100 EAL: Restoring previous memory policy: 4 00:04:05.100 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.100 EAL: request: mp_malloc_sync 00:04:05.100 EAL: No shared files mode enabled, IPC is disabled 00:04:05.100 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.100 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.100 EAL: request: mp_malloc_sync 00:04:05.100 EAL: No shared files mode enabled, IPC is disabled 00:04:05.100 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.360 EAL: Trying to obtain current memory policy. 00:04:05.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.360 EAL: Restoring previous memory policy: 4 00:04:05.360 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.360 EAL: request: mp_malloc_sync 00:04:05.360 EAL: No shared files mode enabled, IPC is disabled 00:04:05.360 EAL: Heap on socket 0 was expanded by 258MB 00:04:05.938 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.938 EAL: request: mp_malloc_sync 00:04:05.938 EAL: No shared files mode enabled, IPC is disabled 00:04:05.938 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.505 EAL: Trying to obtain current memory policy. 
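The expand/shrink pairs running through this stretch are vtophys_spdk_malloc_test iterations: each round allocates a roughly doubling heap (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB), verifies virtual-to-physical translation, and frees it again. Hugepage consumption mirrors the pattern and can be watched from a second shell while the test runs (sketch):

  watch -n 0.5 "grep -E 'HugePages_(Free|Total)' /proc/meminfo"   # Free dips on every expand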
00:04:06.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.505 EAL: Restoring previous memory policy: 4 00:04:06.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.505 EAL: request: mp_malloc_sync 00:04:06.505 EAL: No shared files mode enabled, IPC is disabled 00:04:06.505 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.441 EAL: request: mp_malloc_sync 00:04:07.441 EAL: No shared files mode enabled, IPC is disabled 00:04:07.441 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.375 EAL: Trying to obtain current memory policy. 00:04:08.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.375 EAL: Restoring previous memory policy: 4 00:04:08.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.375 EAL: request: mp_malloc_sync 00:04:08.375 EAL: No shared files mode enabled, IPC is disabled 00:04:08.375 EAL: Heap on socket 0 was expanded by 1026MB 00:04:10.279 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.280 EAL: request: mp_malloc_sync 00:04:10.280 EAL: No shared files mode enabled, IPC is disabled 00:04:10.280 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.183 passed 00:04:12.183 00:04:12.183 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.183 suites 1 1 n/a 0 0 00:04:12.183 tests 2 2 2 0 0 00:04:12.183 asserts 497 497 497 0 n/a 00:04:12.183 00:04:12.183 Elapsed time = 7.643 seconds 00:04:12.183 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.183 EAL: request: mp_malloc_sync 00:04:12.183 EAL: No shared files mode enabled, IPC is disabled 00:04:12.183 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.183 EAL: No shared files mode enabled, IPC is disabled 00:04:12.183 EAL: No shared files mode enabled, IPC is disabled 00:04:12.183 EAL: No shared files mode enabled, IPC is disabled 00:04:12.183 00:04:12.183 real 0m7.862s 00:04:12.183 user 0m7.056s 00:04:12.183 sys 0m0.753s 00:04:12.183 12:07:18 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.183 12:07:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:12.183 ************************************ 00:04:12.183 END TEST env_vtophys 00:04:12.183 ************************************ 00:04:12.183 12:07:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:12.183 12:07:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.183 12:07:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.183 12:07:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.183 ************************************ 00:04:12.183 START TEST env_pci 00:04:12.183 ************************************ 00:04:12.183 12:07:18 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:12.183 00:04:12.183 00:04:12.183 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.183 http://cunit.sourceforge.net/ 00:04:12.183 00:04:12.183 00:04:12.183 Suite: pci 00:04:12.183 Test: pci_hook ...[2024-12-10 12:07:18.700209] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3439918 has claimed it 00:04:12.183 EAL: Cannot find device (10000:00:01.0) 00:04:12.183 EAL: Failed to attach device on primary process 00:04:12.183 passed 00:04:12.183 00:04:12.183 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:12.183 suites 1 1 n/a 0 0 00:04:12.183 tests 1 1 1 0 0 00:04:12.183 asserts 25 25 25 0 n/a 00:04:12.183 00:04:12.183 Elapsed time = 0.043 seconds 00:04:12.183 00:04:12.183 real 0m0.121s 00:04:12.183 user 0m0.052s 00:04:12.183 sys 0m0.068s 00:04:12.183 12:07:18 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.183 12:07:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:12.183 ************************************ 00:04:12.183 END TEST env_pci 00:04:12.183 ************************************ 00:04:12.183 12:07:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.183 12:07:18 env -- env/env.sh@15 -- # uname 00:04:12.183 12:07:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:12.183 12:07:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.183 12:07:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.183 12:07:18 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:12.183 12:07:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.183 12:07:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.183 ************************************ 00:04:12.183 START TEST env_dpdk_post_init 00:04:12.183 ************************************ 00:04:12.183 12:07:18 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.183 EAL: Detected CPU lcores: 96 00:04:12.183 EAL: Detected NUMA nodes: 2 00:04:12.183 EAL: Detected shared linkage of DPDK 00:04:12.183 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.183 EAL: Selected IOVA mode 'VA' 00:04:12.183 EAL: VFIO support initialized 00:04:12.183 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.442 EAL: Using IOMMU type 1 (Type 1) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:12.442 EAL: Ignore mapping IO port bar(1) 00:04:12.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:13.380 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:13.380 EAL: Ignore mapping IO port bar(1) 00:04:13.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:16.664 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:16.664 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:16.664 Starting DPDK initialization... 00:04:16.664 Starting SPDK post initialization... 00:04:16.664 SPDK NVMe probe 00:04:16.664 Attaching to 0000:5e:00.0 00:04:16.664 Attached to 0000:5e:00.0 00:04:16.664 Cleaning up... 00:04:16.664 00:04:16.664 real 0m4.439s 00:04:16.664 user 0m3.020s 00:04:16.664 sys 0m0.487s 00:04:16.664 12:07:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.664 12:07:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.664 ************************************ 00:04:16.664 END TEST env_dpdk_post_init 00:04:16.664 ************************************ 00:04:16.664 12:07:23 env -- env/env.sh@26 -- # uname 00:04:16.664 12:07:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:16.664 12:07:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.664 12:07:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.664 12:07:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.664 12:07:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.664 ************************************ 00:04:16.664 START TEST env_mem_callbacks 00:04:16.664 ************************************ 00:04:16.664 12:07:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.664 EAL: Detected CPU lcores: 96 00:04:16.664 EAL: Detected NUMA nodes: 2 00:04:16.664 EAL: Detected shared linkage of DPDK 00:04:16.664 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.664 EAL: Selected IOVA mode 'VA' 00:04:16.664 EAL: VFIO support initialized 00:04:16.664 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.664 00:04:16.664 00:04:16.664 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.664 http://cunit.sourceforge.net/ 00:04:16.664 00:04:16.664 00:04:16.664 Suite: memory 00:04:16.664 Test: test ... 
00:04:16.664 register 0x200000200000 2097152 00:04:16.664 malloc 3145728 00:04:16.664 register 0x200000400000 4194304 00:04:16.664 buf 0x2000004fffc0 len 3145728 PASSED 00:04:16.664 malloc 64 00:04:16.664 buf 0x2000004ffec0 len 64 PASSED 00:04:16.664 malloc 4194304 00:04:16.664 register 0x200000800000 6291456 00:04:16.664 buf 0x2000009fffc0 len 4194304 PASSED 00:04:16.664 free 0x2000004fffc0 3145728 00:04:16.664 free 0x2000004ffec0 64 00:04:16.664 unregister 0x200000400000 4194304 PASSED 00:04:16.664 free 0x2000009fffc0 4194304 00:04:16.664 unregister 0x200000800000 6291456 PASSED 00:04:16.664 malloc 8388608 00:04:16.664 register 0x200000400000 10485760 00:04:16.923 buf 0x2000005fffc0 len 8388608 PASSED 00:04:16.923 free 0x2000005fffc0 8388608 00:04:16.923 unregister 0x200000400000 10485760 PASSED 00:04:16.923 passed 00:04:16.923 00:04:16.923 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.923 suites 1 1 n/a 0 0 00:04:16.923 tests 1 1 1 0 0 00:04:16.923 asserts 15 15 15 0 n/a 00:04:16.923 00:04:16.923 Elapsed time = 0.068 seconds 00:04:16.923 00:04:16.923 real 0m0.155s 00:04:16.923 user 0m0.090s 00:04:16.923 sys 0m0.065s 00:04:16.923 12:07:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.923 12:07:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:16.923 ************************************ 00:04:16.923 END TEST env_mem_callbacks 00:04:16.923 ************************************ 00:04:16.923 00:04:16.923 real 0m13.347s 00:04:16.923 user 0m10.679s 00:04:16.923 sys 0m1.716s 00:04:16.923 12:07:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.923 12:07:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.923 ************************************ 00:04:16.923 END TEST env 00:04:16.923 ************************************ 00:04:16.923 12:07:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:16.923 12:07:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.923 12:07:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.923 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:04:16.923 ************************************ 00:04:16.923 START TEST rpc 00:04:16.923 ************************************ 00:04:16.923 12:07:23 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:16.923 * Looking for test storage... 
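Each env test above is a standalone CUnit executable under test/env/; a minimal sketch of re-running the two binaries exercised here by hand, assuming a built SPDK tree at the workspace path from this log and root privileges for hugepages/VFIO:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # memory-callback suite: produces the register/unregister lines traced above
    sudo $SPDK_DIR/test/env/mem_callbacks/mem_callbacks
    # EAL bring-up against the probed ioat/nvme devices, same flags as env.sh passed
    sudo $SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000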
00:04:16.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.923 12:07:23 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.923 12:07:23 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.923 12:07:23 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.182 12:07:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.182 12:07:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.182 12:07:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.182 12:07:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.182 12:07:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.182 12:07:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.182 12:07:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.182 12:07:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.182 12:07:23 rpc -- scripts/common.sh@345 -- # : 1 00:04:17.182 12:07:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.182 12:07:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.182 12:07:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.182 12:07:23 rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.182 12:07:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.182 12:07:23 rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.182 12:07:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.182 12:07:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.182 12:07:23 rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.182 12:07:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.182 12:07:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.182 12:07:23 rpc -- scripts/common.sh@368 -- # return 0 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.182 --rc genhtml_branch_coverage=1 00:04:17.182 --rc genhtml_function_coverage=1 00:04:17.182 --rc genhtml_legend=1 00:04:17.182 --rc geninfo_all_blocks=1 00:04:17.182 --rc geninfo_unexecuted_blocks=1 00:04:17.182 00:04:17.182 ' 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.182 --rc genhtml_branch_coverage=1 00:04:17.182 --rc genhtml_function_coverage=1 00:04:17.182 --rc genhtml_legend=1 00:04:17.182 --rc geninfo_all_blocks=1 00:04:17.182 --rc geninfo_unexecuted_blocks=1 00:04:17.182 00:04:17.182 ' 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:17.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.182 --rc genhtml_branch_coverage=1 00:04:17.182 --rc genhtml_function_coverage=1 
00:04:17.182 --rc genhtml_legend=1 00:04:17.182 --rc geninfo_all_blocks=1 00:04:17.182 --rc geninfo_unexecuted_blocks=1 00:04:17.182 00:04:17.182 ' 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.182 --rc genhtml_branch_coverage=1 00:04:17.182 --rc genhtml_function_coverage=1 00:04:17.182 --rc genhtml_legend=1 00:04:17.182 --rc geninfo_all_blocks=1 00:04:17.182 --rc geninfo_unexecuted_blocks=1 00:04:17.182 00:04:17.182 ' 00:04:17.182 12:07:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3440865 00:04:17.182 12:07:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.182 12:07:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:17.182 12:07:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3440865 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 3440865 ']' 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.182 12:07:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.183 12:07:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.183 12:07:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.183 [2024-12-10 12:07:23.864848] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:04:17.183 [2024-12-10 12:07:23.864938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440865 ] 00:04:17.183 [2024-12-10 12:07:23.976607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.441 [2024-12-10 12:07:24.081733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:17.441 [2024-12-10 12:07:24.081772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3440865' to capture a snapshot of events at runtime. 00:04:17.441 [2024-12-10 12:07:24.081784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:17.441 [2024-12-10 12:07:24.081812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:17.441 [2024-12-10 12:07:24.081826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3440865 for offline analysis/debug. 
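The app_setup_trace notices above spell out both ways to inspect the 'bdev' tracepoint group that spdk_tgt was started with (-e bdev); a minimal sketch of each, assuming spdk_trace was built alongside the target in build/bin:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # live snapshot of events from the running target (pid from the notice above)
    $SPDK_DIR/build/bin/spdk_trace -s spdk_tgt -p 3440865
    # or copy the shared-memory ring for offline analysis after the target exits
    cp /dev/shm/spdk_tgt_trace.pid3440865 /tmp/
    $SPDK_DIR/build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid3440865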
00:04:17.441 [2024-12-10 12:07:24.083300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.378 12:07:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.378 12:07:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:18.378 12:07:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:18.378 12:07:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:18.378 12:07:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:18.378 12:07:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:18.378 12:07:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.378 12:07:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.378 12:07:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.378 ************************************ 00:04:18.378 START TEST rpc_integrity 00:04:18.378 ************************************ 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.378 12:07:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.378 { 00:04:18.378 "name": "Malloc0", 00:04:18.378 "aliases": [ 00:04:18.378 "b360a81e-04d1-4907-b17a-5120dc3e8a6c" 00:04:18.378 ], 00:04:18.378 "product_name": "Malloc disk", 00:04:18.378 "block_size": 512, 00:04:18.378 "num_blocks": 16384, 00:04:18.378 "uuid": "b360a81e-04d1-4907-b17a-5120dc3e8a6c", 00:04:18.378 "assigned_rate_limits": { 00:04:18.378 "rw_ios_per_sec": 0, 00:04:18.378 "rw_mbytes_per_sec": 0, 00:04:18.378 "r_mbytes_per_sec": 0, 00:04:18.378 "w_mbytes_per_sec": 0 00:04:18.378 }, 
00:04:18.378 "claimed": false, 00:04:18.378 "zoned": false, 00:04:18.378 "supported_io_types": { 00:04:18.378 "read": true, 00:04:18.378 "write": true, 00:04:18.378 "unmap": true, 00:04:18.378 "flush": true, 00:04:18.378 "reset": true, 00:04:18.378 "nvme_admin": false, 00:04:18.378 "nvme_io": false, 00:04:18.378 "nvme_io_md": false, 00:04:18.378 "write_zeroes": true, 00:04:18.378 "zcopy": true, 00:04:18.378 "get_zone_info": false, 00:04:18.378 "zone_management": false, 00:04:18.378 "zone_append": false, 00:04:18.378 "compare": false, 00:04:18.378 "compare_and_write": false, 00:04:18.378 "abort": true, 00:04:18.378 "seek_hole": false, 00:04:18.378 "seek_data": false, 00:04:18.378 "copy": true, 00:04:18.378 "nvme_iov_md": false 00:04:18.378 }, 00:04:18.378 "memory_domains": [ 00:04:18.378 { 00:04:18.378 "dma_device_id": "system", 00:04:18.378 "dma_device_type": 1 00:04:18.378 }, 00:04:18.378 { 00:04:18.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.378 "dma_device_type": 2 00:04:18.378 } 00:04:18.378 ], 00:04:18.378 "driver_specific": {} 00:04:18.378 } 00:04:18.378 ]' 00:04:18.378 12:07:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.378 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.378 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:18.378 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.378 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.378 [2024-12-10 12:07:25.029493] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:18.378 [2024-12-10 12:07:25.029539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.378 [2024-12-10 12:07:25.029564] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021c80 00:04:18.378 [2024-12-10 12:07:25.029574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.378 [2024-12-10 12:07:25.031568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.378 [2024-12-10 12:07:25.031595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.378 Passthru0 00:04:18.378 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.378 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.378 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.378 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.378 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.378 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.378 { 00:04:18.378 "name": "Malloc0", 00:04:18.378 "aliases": [ 00:04:18.378 "b360a81e-04d1-4907-b17a-5120dc3e8a6c" 00:04:18.378 ], 00:04:18.378 "product_name": "Malloc disk", 00:04:18.378 "block_size": 512, 00:04:18.378 "num_blocks": 16384, 00:04:18.378 "uuid": "b360a81e-04d1-4907-b17a-5120dc3e8a6c", 00:04:18.378 "assigned_rate_limits": { 00:04:18.378 "rw_ios_per_sec": 0, 00:04:18.378 "rw_mbytes_per_sec": 0, 00:04:18.378 "r_mbytes_per_sec": 0, 00:04:18.378 "w_mbytes_per_sec": 0 00:04:18.378 }, 00:04:18.378 "claimed": true, 00:04:18.378 "claim_type": "exclusive_write", 00:04:18.378 "zoned": false, 00:04:18.378 "supported_io_types": { 00:04:18.378 "read": true, 00:04:18.378 "write": true, 00:04:18.378 "unmap": true, 00:04:18.378 
"flush": true, 00:04:18.378 "reset": true, 00:04:18.378 "nvme_admin": false, 00:04:18.378 "nvme_io": false, 00:04:18.378 "nvme_io_md": false, 00:04:18.378 "write_zeroes": true, 00:04:18.378 "zcopy": true, 00:04:18.378 "get_zone_info": false, 00:04:18.378 "zone_management": false, 00:04:18.378 "zone_append": false, 00:04:18.378 "compare": false, 00:04:18.378 "compare_and_write": false, 00:04:18.378 "abort": true, 00:04:18.378 "seek_hole": false, 00:04:18.378 "seek_data": false, 00:04:18.378 "copy": true, 00:04:18.378 "nvme_iov_md": false 00:04:18.378 }, 00:04:18.378 "memory_domains": [ 00:04:18.378 { 00:04:18.378 "dma_device_id": "system", 00:04:18.378 "dma_device_type": 1 00:04:18.378 }, 00:04:18.378 { 00:04:18.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.378 "dma_device_type": 2 00:04:18.378 } 00:04:18.378 ], 00:04:18.378 "driver_specific": {} 00:04:18.378 }, 00:04:18.378 { 00:04:18.378 "name": "Passthru0", 00:04:18.378 "aliases": [ 00:04:18.378 "fb019676-6687-5095-9df2-e5d31a43a0f3" 00:04:18.378 ], 00:04:18.378 "product_name": "passthru", 00:04:18.378 "block_size": 512, 00:04:18.378 "num_blocks": 16384, 00:04:18.378 "uuid": "fb019676-6687-5095-9df2-e5d31a43a0f3", 00:04:18.378 "assigned_rate_limits": { 00:04:18.378 "rw_ios_per_sec": 0, 00:04:18.378 "rw_mbytes_per_sec": 0, 00:04:18.378 "r_mbytes_per_sec": 0, 00:04:18.378 "w_mbytes_per_sec": 0 00:04:18.378 }, 00:04:18.378 "claimed": false, 00:04:18.378 "zoned": false, 00:04:18.378 "supported_io_types": { 00:04:18.378 "read": true, 00:04:18.378 "write": true, 00:04:18.378 "unmap": true, 00:04:18.378 "flush": true, 00:04:18.378 "reset": true, 00:04:18.378 "nvme_admin": false, 00:04:18.378 "nvme_io": false, 00:04:18.378 "nvme_io_md": false, 00:04:18.378 "write_zeroes": true, 00:04:18.378 "zcopy": true, 00:04:18.378 "get_zone_info": false, 00:04:18.378 "zone_management": false, 00:04:18.378 "zone_append": false, 00:04:18.378 "compare": false, 00:04:18.378 "compare_and_write": false, 00:04:18.378 "abort": true, 00:04:18.378 "seek_hole": false, 00:04:18.378 "seek_data": false, 00:04:18.379 "copy": true, 00:04:18.379 "nvme_iov_md": false 00:04:18.379 }, 00:04:18.379 "memory_domains": [ 00:04:18.379 { 00:04:18.379 "dma_device_id": "system", 00:04:18.379 "dma_device_type": 1 00:04:18.379 }, 00:04:18.379 { 00:04:18.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.379 "dma_device_type": 2 00:04:18.379 } 00:04:18.379 ], 00:04:18.379 "driver_specific": { 00:04:18.379 "passthru": { 00:04:18.379 "name": "Passthru0", 00:04:18.379 "base_bdev_name": "Malloc0" 00:04:18.379 } 00:04:18.379 } 00:04:18.379 } 00:04:18.379 ]' 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:18.379 12:07:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.379 00:04:18.379 real 0m0.264s 00:04:18.379 user 0m0.136s 00:04:18.379 sys 0m0.030s 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.379 12:07:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.379 ************************************ 00:04:18.379 END TEST rpc_integrity 00:04:18.379 ************************************ 00:04:18.379 12:07:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:18.379 12:07:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.379 12:07:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.379 12:07:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 ************************************ 00:04:18.644 START TEST rpc_plugins 00:04:18.644 ************************************ 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:18.644 { 00:04:18.644 "name": "Malloc1", 00:04:18.644 "aliases": [ 00:04:18.644 "3f672cbd-a126-44fb-96a1-c0a5b3999a67" 00:04:18.644 ], 00:04:18.644 "product_name": "Malloc disk", 00:04:18.644 "block_size": 4096, 00:04:18.644 "num_blocks": 256, 00:04:18.644 "uuid": "3f672cbd-a126-44fb-96a1-c0a5b3999a67", 00:04:18.644 "assigned_rate_limits": { 00:04:18.644 "rw_ios_per_sec": 0, 00:04:18.644 "rw_mbytes_per_sec": 0, 00:04:18.644 "r_mbytes_per_sec": 0, 00:04:18.644 "w_mbytes_per_sec": 0 00:04:18.644 }, 00:04:18.644 "claimed": false, 00:04:18.644 "zoned": false, 00:04:18.644 "supported_io_types": { 00:04:18.644 "read": true, 00:04:18.644 "write": true, 00:04:18.644 "unmap": true, 00:04:18.644 "flush": true, 00:04:18.644 "reset": true, 00:04:18.644 "nvme_admin": false, 00:04:18.644 "nvme_io": false, 00:04:18.644 "nvme_io_md": false, 00:04:18.644 "write_zeroes": true, 00:04:18.644 "zcopy": true, 00:04:18.644 "get_zone_info": false, 00:04:18.644 "zone_management": false, 00:04:18.644 "zone_append": false, 00:04:18.644 "compare": false, 00:04:18.644 "compare_and_write": false, 00:04:18.644 "abort": true, 00:04:18.644 "seek_hole": false, 00:04:18.644 "seek_data": false, 00:04:18.644 "copy": true, 00:04:18.644 "nvme_iov_md": 
false 00:04:18.644 }, 00:04:18.644 "memory_domains": [ 00:04:18.644 { 00:04:18.644 "dma_device_id": "system", 00:04:18.644 "dma_device_type": 1 00:04:18.644 }, 00:04:18.644 { 00:04:18.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.644 "dma_device_type": 2 00:04:18.644 } 00:04:18.644 ], 00:04:18.644 "driver_specific": {} 00:04:18.644 } 00:04:18.644 ]' 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:18.644 12:07:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:18.644 00:04:18.644 real 0m0.129s 00:04:18.644 user 0m0.073s 00:04:18.644 sys 0m0.012s 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.644 12:07:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 ************************************ 00:04:18.644 END TEST rpc_plugins 00:04:18.644 ************************************ 00:04:18.644 12:07:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:18.644 12:07:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.644 12:07:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.644 12:07:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 ************************************ 00:04:18.644 START TEST rpc_trace_cmd_test 00:04:18.644 ************************************ 00:04:18.644 12:07:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:18.644 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:18.644 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:18.644 12:07:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.644 12:07:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.644 12:07:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.644 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:18.645 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3440865", 00:04:18.645 "tpoint_group_mask": "0x8", 00:04:18.645 "iscsi_conn": { 00:04:18.645 "mask": "0x2", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "scsi": { 00:04:18.645 "mask": "0x4", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "bdev": { 00:04:18.645 "mask": "0x8", 00:04:18.645 "tpoint_mask": "0xffffffffffffffff" 00:04:18.645 }, 00:04:18.645 "nvmf_rdma": { 00:04:18.645 "mask": "0x10", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "nvmf_tcp": { 00:04:18.645 "mask": "0x20", 00:04:18.645 
"tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "ftl": { 00:04:18.645 "mask": "0x40", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "blobfs": { 00:04:18.645 "mask": "0x80", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "dsa": { 00:04:18.645 "mask": "0x200", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "thread": { 00:04:18.645 "mask": "0x400", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "nvme_pcie": { 00:04:18.645 "mask": "0x800", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "iaa": { 00:04:18.645 "mask": "0x1000", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "nvme_tcp": { 00:04:18.645 "mask": "0x2000", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "bdev_nvme": { 00:04:18.645 "mask": "0x4000", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "sock": { 00:04:18.645 "mask": "0x8000", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "blob": { 00:04:18.645 "mask": "0x10000", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "bdev_raid": { 00:04:18.645 "mask": "0x20000", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 }, 00:04:18.645 "scheduler": { 00:04:18.645 "mask": "0x40000", 00:04:18.645 "tpoint_mask": "0x0" 00:04:18.645 } 00:04:18.645 }' 00:04:18.645 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:18.903 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:18.904 00:04:18.904 real 0m0.196s 00:04:18.904 user 0m0.159s 00:04:18.904 sys 0m0.028s 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.904 12:07:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.904 ************************************ 00:04:18.904 END TEST rpc_trace_cmd_test 00:04:18.904 ************************************ 00:04:18.904 12:07:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:18.904 12:07:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:18.904 12:07:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:18.904 12:07:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.904 12:07:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.904 12:07:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.904 ************************************ 00:04:18.904 START TEST rpc_daemon_integrity 00:04:18.904 ************************************ 00:04:18.904 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:18.904 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.904 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.904 12:07:25 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.904 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.904 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.904 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.163 { 00:04:19.163 "name": "Malloc2", 00:04:19.163 "aliases": [ 00:04:19.163 "0b9f8305-4789-4869-add0-1943f9d7b86f" 00:04:19.163 ], 00:04:19.163 "product_name": "Malloc disk", 00:04:19.163 "block_size": 512, 00:04:19.163 "num_blocks": 16384, 00:04:19.163 "uuid": "0b9f8305-4789-4869-add0-1943f9d7b86f", 00:04:19.163 "assigned_rate_limits": { 00:04:19.163 "rw_ios_per_sec": 0, 00:04:19.163 "rw_mbytes_per_sec": 0, 00:04:19.163 "r_mbytes_per_sec": 0, 00:04:19.163 "w_mbytes_per_sec": 0 00:04:19.163 }, 00:04:19.163 "claimed": false, 00:04:19.163 "zoned": false, 00:04:19.163 "supported_io_types": { 00:04:19.163 "read": true, 00:04:19.163 "write": true, 00:04:19.163 "unmap": true, 00:04:19.163 "flush": true, 00:04:19.163 "reset": true, 00:04:19.163 "nvme_admin": false, 00:04:19.163 "nvme_io": false, 00:04:19.163 "nvme_io_md": false, 00:04:19.163 "write_zeroes": true, 00:04:19.163 "zcopy": true, 00:04:19.163 "get_zone_info": false, 00:04:19.163 "zone_management": false, 00:04:19.163 "zone_append": false, 00:04:19.163 "compare": false, 00:04:19.163 "compare_and_write": false, 00:04:19.163 "abort": true, 00:04:19.163 "seek_hole": false, 00:04:19.163 "seek_data": false, 00:04:19.163 "copy": true, 00:04:19.163 "nvme_iov_md": false 00:04:19.163 }, 00:04:19.163 "memory_domains": [ 00:04:19.163 { 00:04:19.163 "dma_device_id": "system", 00:04:19.163 "dma_device_type": 1 00:04:19.163 }, 00:04:19.163 { 00:04:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.163 "dma_device_type": 2 00:04:19.163 } 00:04:19.163 ], 00:04:19.163 "driver_specific": {} 00:04:19.163 } 00:04:19.163 ]' 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.163 [2024-12-10 12:07:25.813786] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:19.163 
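The vbdev_passthru notices around this point show Passthru0 claiming Malloc2, the same lifecycle rpc_integrity exercised on Malloc0 above; a minimal sketch of driving that lifecycle directly with scripts/rpc.py, which speaks to the same /var/tmp/spdk.sock as the tests' rpc_cmd wrapper:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 8 512      # 8 MiB, 512 B blocks; auto-named (Malloc2 in this run)
    $SPDK_DIR/scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    $SPDK_DIR/scripts/rpc.py bdev_get_bdevs                # both bdevs; Malloc2 claimed exclusive_write
    $SPDK_DIR/scripts/rpc.py bdev_passthru_delete Passthru0
    $SPDK_DIR/scripts/rpc.py bdev_malloc_delete Malloc2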
[2024-12-10 12:07:25.813825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.163 [2024-12-10 12:07:25.813845] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:04:19.163 [2024-12-10 12:07:25.813854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.163 [2024-12-10 12:07:25.815775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.163 [2024-12-10 12:07:25.815800] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.163 Passthru0 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.163 { 00:04:19.163 "name": "Malloc2", 00:04:19.163 "aliases": [ 00:04:19.163 "0b9f8305-4789-4869-add0-1943f9d7b86f" 00:04:19.163 ], 00:04:19.163 "product_name": "Malloc disk", 00:04:19.163 "block_size": 512, 00:04:19.163 "num_blocks": 16384, 00:04:19.163 "uuid": "0b9f8305-4789-4869-add0-1943f9d7b86f", 00:04:19.163 "assigned_rate_limits": { 00:04:19.163 "rw_ios_per_sec": 0, 00:04:19.163 "rw_mbytes_per_sec": 0, 00:04:19.163 "r_mbytes_per_sec": 0, 00:04:19.163 "w_mbytes_per_sec": 0 00:04:19.163 }, 00:04:19.163 "claimed": true, 00:04:19.163 "claim_type": "exclusive_write", 00:04:19.163 "zoned": false, 00:04:19.163 "supported_io_types": { 00:04:19.163 "read": true, 00:04:19.163 "write": true, 00:04:19.163 "unmap": true, 00:04:19.163 "flush": true, 00:04:19.163 "reset": true, 00:04:19.163 "nvme_admin": false, 00:04:19.163 "nvme_io": false, 00:04:19.163 "nvme_io_md": false, 00:04:19.163 "write_zeroes": true, 00:04:19.163 "zcopy": true, 00:04:19.163 "get_zone_info": false, 00:04:19.163 "zone_management": false, 00:04:19.163 "zone_append": false, 00:04:19.163 "compare": false, 00:04:19.163 "compare_and_write": false, 00:04:19.163 "abort": true, 00:04:19.163 "seek_hole": false, 00:04:19.163 "seek_data": false, 00:04:19.163 "copy": true, 00:04:19.163 "nvme_iov_md": false 00:04:19.163 }, 00:04:19.163 "memory_domains": [ 00:04:19.163 { 00:04:19.163 "dma_device_id": "system", 00:04:19.163 "dma_device_type": 1 00:04:19.163 }, 00:04:19.163 { 00:04:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.163 "dma_device_type": 2 00:04:19.163 } 00:04:19.163 ], 00:04:19.163 "driver_specific": {} 00:04:19.163 }, 00:04:19.163 { 00:04:19.163 "name": "Passthru0", 00:04:19.163 "aliases": [ 00:04:19.163 "ca34009c-df6f-578b-bb6b-49d6885ed040" 00:04:19.163 ], 00:04:19.163 "product_name": "passthru", 00:04:19.163 "block_size": 512, 00:04:19.163 "num_blocks": 16384, 00:04:19.163 "uuid": "ca34009c-df6f-578b-bb6b-49d6885ed040", 00:04:19.163 "assigned_rate_limits": { 00:04:19.163 "rw_ios_per_sec": 0, 00:04:19.163 "rw_mbytes_per_sec": 0, 00:04:19.163 "r_mbytes_per_sec": 0, 00:04:19.163 "w_mbytes_per_sec": 0 00:04:19.163 }, 00:04:19.163 "claimed": false, 00:04:19.163 "zoned": false, 00:04:19.163 "supported_io_types": { 00:04:19.163 "read": true, 00:04:19.163 "write": true, 00:04:19.163 "unmap": true, 00:04:19.163 "flush": true, 00:04:19.163 "reset": true, 
00:04:19.163 "nvme_admin": false, 00:04:19.163 "nvme_io": false, 00:04:19.163 "nvme_io_md": false, 00:04:19.163 "write_zeroes": true, 00:04:19.163 "zcopy": true, 00:04:19.163 "get_zone_info": false, 00:04:19.163 "zone_management": false, 00:04:19.163 "zone_append": false, 00:04:19.163 "compare": false, 00:04:19.163 "compare_and_write": false, 00:04:19.163 "abort": true, 00:04:19.163 "seek_hole": false, 00:04:19.163 "seek_data": false, 00:04:19.163 "copy": true, 00:04:19.163 "nvme_iov_md": false 00:04:19.163 }, 00:04:19.163 "memory_domains": [ 00:04:19.163 { 00:04:19.163 "dma_device_id": "system", 00:04:19.163 "dma_device_type": 1 00:04:19.163 }, 00:04:19.163 { 00:04:19.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.163 "dma_device_type": 2 00:04:19.163 } 00:04:19.163 ], 00:04:19.163 "driver_specific": { 00:04:19.163 "passthru": { 00:04:19.163 "name": "Passthru0", 00:04:19.163 "base_bdev_name": "Malloc2" 00:04:19.163 } 00:04:19.163 } 00:04:19.163 } 00:04:19.163 ]' 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.163 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.164 00:04:19.164 real 0m0.258s 00:04:19.164 user 0m0.146s 00:04:19.164 sys 0m0.029s 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.164 12:07:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.164 ************************************ 00:04:19.164 END TEST rpc_daemon_integrity 00:04:19.164 ************************************ 00:04:19.164 12:07:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:19.164 12:07:25 rpc -- rpc/rpc.sh@84 -- # killprocess 3440865 00:04:19.164 12:07:25 rpc -- common/autotest_common.sh@954 -- # '[' -z 3440865 ']' 00:04:19.164 12:07:25 rpc -- common/autotest_common.sh@958 -- # kill -0 3440865 00:04:19.164 12:07:25 rpc -- common/autotest_common.sh@959 -- # uname 00:04:19.164 12:07:25 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.423 12:07:25 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3440865 
00:04:19.423 12:07:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.423 12:07:26 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.423 12:07:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3440865' 00:04:19.423 killing process with pid 3440865 00:04:19.423 12:07:26 rpc -- common/autotest_common.sh@973 -- # kill 3440865 00:04:19.423 12:07:26 rpc -- common/autotest_common.sh@978 -- # wait 3440865 00:04:21.955 00:04:21.955 real 0m4.708s 00:04:21.955 user 0m5.213s 00:04:21.955 sys 0m0.772s 00:04:21.955 12:07:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.955 12:07:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.955 ************************************ 00:04:21.955 END TEST rpc 00:04:21.955 ************************************ 00:04:21.955 12:07:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:21.955 12:07:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.955 12:07:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.955 12:07:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.955 ************************************ 00:04:21.955 START TEST skip_rpc 00:04:21.955 ************************************ 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:21.955 * Looking for test storage... 00:04:21.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.955 12:07:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.955 --rc genhtml_branch_coverage=1 00:04:21.955 --rc genhtml_function_coverage=1 00:04:21.955 --rc genhtml_legend=1 00:04:21.955 --rc geninfo_all_blocks=1 00:04:21.955 --rc geninfo_unexecuted_blocks=1 00:04:21.955 00:04:21.955 ' 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.955 --rc genhtml_branch_coverage=1 00:04:21.955 --rc genhtml_function_coverage=1 00:04:21.955 --rc genhtml_legend=1 00:04:21.955 --rc geninfo_all_blocks=1 00:04:21.955 --rc geninfo_unexecuted_blocks=1 00:04:21.955 00:04:21.955 ' 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.955 --rc genhtml_branch_coverage=1 00:04:21.955 --rc genhtml_function_coverage=1 00:04:21.955 --rc genhtml_legend=1 00:04:21.955 --rc geninfo_all_blocks=1 00:04:21.955 --rc geninfo_unexecuted_blocks=1 00:04:21.955 00:04:21.955 ' 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.955 --rc genhtml_branch_coverage=1 00:04:21.955 --rc genhtml_function_coverage=1 00:04:21.955 --rc genhtml_legend=1 00:04:21.955 --rc geninfo_all_blocks=1 00:04:21.955 --rc geninfo_unexecuted_blocks=1 00:04:21.955 00:04:21.955 ' 00:04:21.955 12:07:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.955 12:07:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:21.955 12:07:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.955 12:07:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.955 ************************************ 00:04:21.955 START TEST skip_rpc 00:04:21.955 ************************************ 00:04:21.955 12:07:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:21.955 
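test_skip_rpc, entered above, starts the target with --no-rpc-server and asserts that no RPC can succeed against it; a condensed sketch of the flow traced below, with scripts/rpc.py standing in for the rpc_cmd wrapper the test uses:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                      # no RPC socket to poll, so the test just sleeps
    if $SPDK_DIR/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo 'FAIL: RPC answered despite --no-rpc-server' >&2
    fi
    kill $spdk_pid && wait $spdk_pid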
12:07:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3441825 00:04:21.955 12:07:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.955 12:07:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:21.955 12:07:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:21.955 [2024-12-10 12:07:28.691684] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:04:21.955 [2024-12-10 12:07:28.691763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441825 ] 00:04:22.274 [2024-12-10 12:07:28.802561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.274 [2024-12-10 12:07:28.905738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3441825 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3441825 ']' 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3441825 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3441825 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3441825' 00:04:27.540 killing process with pid 3441825 00:04:27.540 12:07:33 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3441825 00:04:27.540 12:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3441825 00:04:29.450 00:04:29.450 real 0m7.370s 00:04:29.450 user 0m6.992s 00:04:29.450 sys 0m0.394s 00:04:29.450 12:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.450 12:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.450 ************************************ 00:04:29.450 END TEST skip_rpc 00:04:29.450 ************************************ 00:04:29.450 12:07:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.450 12:07:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.450 12:07:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.450 12:07:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.450 ************************************ 00:04:29.450 START TEST skip_rpc_with_json 00:04:29.450 ************************************ 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3443180 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3443180 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3443180 ']' 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.450 12:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.450 [2024-12-10 12:07:36.124212] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
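skip_rpc_with_json, started above, verifies that save_config captures state created over RPC: nvmf_get_transports fails with 'No such device' until the TCP transport exists, after which the saved JSON dumped below carries the nvmf_create_transport entry. A minimal sketch of the same three calls via scripts/rpc.py (the test itself uses the rpc_cmd wrapper):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/scripts/rpc.py nvmf_get_transports --trtype tcp || true   # -19 / No such device at first
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp
    $SPDK_DIR/scripts/rpc.py save_config > /tmp/config.json             # JSON like the dump below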
00:04:29.450 [2024-12-10 12:07:36.124304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443180 ] 00:04:29.450 [2024-12-10 12:07:36.235448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.710 [2024-12-10 12:07:36.339922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.646 [2024-12-10 12:07:37.148215] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.646 request: 00:04:30.646 { 00:04:30.646 "trtype": "tcp", 00:04:30.646 "method": "nvmf_get_transports", 00:04:30.646 "req_id": 1 00:04:30.646 } 00:04:30.646 Got JSON-RPC error response 00:04:30.646 response: 00:04:30.646 { 00:04:30.646 "code": -19, 00:04:30.646 "message": "No such device" 00:04:30.646 } 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.646 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.647 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.647 [2024-12-10 12:07:37.156321] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.647 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.647 12:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.647 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.647 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.647 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.647 12:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.647 { 00:04:30.647 "subsystems": [ 00:04:30.647 { 00:04:30.647 "subsystem": "fsdev", 00:04:30.647 "config": [ 00:04:30.647 { 00:04:30.647 "method": "fsdev_set_opts", 00:04:30.647 "params": { 00:04:30.647 "fsdev_io_pool_size": 65535, 00:04:30.647 "fsdev_io_cache_size": 256 00:04:30.647 } 00:04:30.647 } 00:04:30.647 ] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "keyring", 00:04:30.647 "config": [] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "iobuf", 00:04:30.647 "config": [ 00:04:30.647 { 00:04:30.647 "method": "iobuf_set_options", 00:04:30.647 "params": { 00:04:30.647 "small_pool_count": 8192, 00:04:30.647 "large_pool_count": 1024, 00:04:30.647 "small_bufsize": 8192, 00:04:30.647 "large_bufsize": 135168, 00:04:30.647 "enable_numa": false 00:04:30.647 } 00:04:30.647 } 00:04:30.647 ] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "sock", 00:04:30.647 "config": [ 
00:04:30.647 { 00:04:30.647 "method": "sock_set_default_impl", 00:04:30.647 "params": { 00:04:30.647 "impl_name": "posix" 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "sock_impl_set_options", 00:04:30.647 "params": { 00:04:30.647 "impl_name": "ssl", 00:04:30.647 "recv_buf_size": 4096, 00:04:30.647 "send_buf_size": 4096, 00:04:30.647 "enable_recv_pipe": true, 00:04:30.647 "enable_quickack": false, 00:04:30.647 "enable_placement_id": 0, 00:04:30.647 "enable_zerocopy_send_server": true, 00:04:30.647 "enable_zerocopy_send_client": false, 00:04:30.647 "zerocopy_threshold": 0, 00:04:30.647 "tls_version": 0, 00:04:30.647 "enable_ktls": false 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "sock_impl_set_options", 00:04:30.647 "params": { 00:04:30.647 "impl_name": "posix", 00:04:30.647 "recv_buf_size": 2097152, 00:04:30.647 "send_buf_size": 2097152, 00:04:30.647 "enable_recv_pipe": true, 00:04:30.647 "enable_quickack": false, 00:04:30.647 "enable_placement_id": 0, 00:04:30.647 "enable_zerocopy_send_server": true, 00:04:30.647 "enable_zerocopy_send_client": false, 00:04:30.647 "zerocopy_threshold": 0, 00:04:30.647 "tls_version": 0, 00:04:30.647 "enable_ktls": false 00:04:30.647 } 00:04:30.647 } 00:04:30.647 ] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "vmd", 00:04:30.647 "config": [] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "accel", 00:04:30.647 "config": [ 00:04:30.647 { 00:04:30.647 "method": "accel_set_options", 00:04:30.647 "params": { 00:04:30.647 "small_cache_size": 128, 00:04:30.647 "large_cache_size": 16, 00:04:30.647 "task_count": 2048, 00:04:30.647 "sequence_count": 2048, 00:04:30.647 "buf_count": 2048 00:04:30.647 } 00:04:30.647 } 00:04:30.647 ] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "bdev", 00:04:30.647 "config": [ 00:04:30.647 { 00:04:30.647 "method": "bdev_set_options", 00:04:30.647 "params": { 00:04:30.647 "bdev_io_pool_size": 65535, 00:04:30.647 "bdev_io_cache_size": 256, 00:04:30.647 "bdev_auto_examine": true, 00:04:30.647 "iobuf_small_cache_size": 128, 00:04:30.647 "iobuf_large_cache_size": 16 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "bdev_raid_set_options", 00:04:30.647 "params": { 00:04:30.647 "process_window_size_kb": 1024, 00:04:30.647 "process_max_bandwidth_mb_sec": 0 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "bdev_iscsi_set_options", 00:04:30.647 "params": { 00:04:30.647 "timeout_sec": 30 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "bdev_nvme_set_options", 00:04:30.647 "params": { 00:04:30.647 "action_on_timeout": "none", 00:04:30.647 "timeout_us": 0, 00:04:30.647 "timeout_admin_us": 0, 00:04:30.647 "keep_alive_timeout_ms": 10000, 00:04:30.647 "arbitration_burst": 0, 00:04:30.647 "low_priority_weight": 0, 00:04:30.647 "medium_priority_weight": 0, 00:04:30.647 "high_priority_weight": 0, 00:04:30.647 "nvme_adminq_poll_period_us": 10000, 00:04:30.647 "nvme_ioq_poll_period_us": 0, 00:04:30.647 "io_queue_requests": 0, 00:04:30.647 "delay_cmd_submit": true, 00:04:30.647 "transport_retry_count": 4, 00:04:30.647 "bdev_retry_count": 3, 00:04:30.647 "transport_ack_timeout": 0, 00:04:30.647 "ctrlr_loss_timeout_sec": 0, 00:04:30.647 "reconnect_delay_sec": 0, 00:04:30.647 "fast_io_fail_timeout_sec": 0, 00:04:30.647 "disable_auto_failback": false, 00:04:30.647 "generate_uuids": false, 00:04:30.647 "transport_tos": 0, 00:04:30.647 "nvme_error_stat": false, 00:04:30.647 "rdma_srq_size": 0, 00:04:30.647 "io_path_stat": 
false, 00:04:30.647 "allow_accel_sequence": false, 00:04:30.647 "rdma_max_cq_size": 0, 00:04:30.647 "rdma_cm_event_timeout_ms": 0, 00:04:30.647 "dhchap_digests": [ 00:04:30.647 "sha256", 00:04:30.647 "sha384", 00:04:30.647 "sha512" 00:04:30.647 ], 00:04:30.647 "dhchap_dhgroups": [ 00:04:30.647 "null", 00:04:30.647 "ffdhe2048", 00:04:30.647 "ffdhe3072", 00:04:30.647 "ffdhe4096", 00:04:30.647 "ffdhe6144", 00:04:30.647 "ffdhe8192" 00:04:30.647 ] 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "bdev_nvme_set_hotplug", 00:04:30.647 "params": { 00:04:30.647 "period_us": 100000, 00:04:30.647 "enable": false 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "bdev_wait_for_examine" 00:04:30.647 } 00:04:30.647 ] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "scsi", 00:04:30.647 "config": null 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "scheduler", 00:04:30.647 "config": [ 00:04:30.647 { 00:04:30.647 "method": "framework_set_scheduler", 00:04:30.647 "params": { 00:04:30.647 "name": "static" 00:04:30.647 } 00:04:30.647 } 00:04:30.647 ] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "vhost_scsi", 00:04:30.647 "config": [] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "vhost_blk", 00:04:30.647 "config": [] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "ublk", 00:04:30.647 "config": [] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "nbd", 00:04:30.647 "config": [] 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "subsystem": "nvmf", 00:04:30.647 "config": [ 00:04:30.647 { 00:04:30.647 "method": "nvmf_set_config", 00:04:30.647 "params": { 00:04:30.647 "discovery_filter": "match_any", 00:04:30.647 "admin_cmd_passthru": { 00:04:30.647 "identify_ctrlr": false 00:04:30.647 }, 00:04:30.647 "dhchap_digests": [ 00:04:30.647 "sha256", 00:04:30.647 "sha384", 00:04:30.647 "sha512" 00:04:30.647 ], 00:04:30.647 "dhchap_dhgroups": [ 00:04:30.647 "null", 00:04:30.647 "ffdhe2048", 00:04:30.647 "ffdhe3072", 00:04:30.647 "ffdhe4096", 00:04:30.647 "ffdhe6144", 00:04:30.647 "ffdhe8192" 00:04:30.647 ] 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "nvmf_set_max_subsystems", 00:04:30.647 "params": { 00:04:30.647 "max_subsystems": 1024 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "nvmf_set_crdt", 00:04:30.647 "params": { 00:04:30.647 "crdt1": 0, 00:04:30.647 "crdt2": 0, 00:04:30.647 "crdt3": 0 00:04:30.647 } 00:04:30.647 }, 00:04:30.647 { 00:04:30.647 "method": "nvmf_create_transport", 00:04:30.647 "params": { 00:04:30.647 "trtype": "TCP", 00:04:30.647 "max_queue_depth": 128, 00:04:30.647 "max_io_qpairs_per_ctrlr": 127, 00:04:30.647 "in_capsule_data_size": 4096, 00:04:30.647 "max_io_size": 131072, 00:04:30.647 "io_unit_size": 131072, 00:04:30.647 "max_aq_depth": 128, 00:04:30.647 "num_shared_buffers": 511, 00:04:30.647 "buf_cache_size": 4294967295, 00:04:30.647 "dif_insert_or_strip": false, 00:04:30.647 "zcopy": false, 00:04:30.647 "c2h_success": true, 00:04:30.647 "sock_priority": 0, 00:04:30.647 "abort_timeout_sec": 1, 00:04:30.647 "ack_timeout": 0, 00:04:30.647 "data_wr_pool_size": 0 00:04:30.647 } 00:04:30.647 } 00:04:30.648 ] 00:04:30.648 }, 00:04:30.648 { 00:04:30.648 "subsystem": "iscsi", 00:04:30.648 "config": [ 00:04:30.648 { 00:04:30.648 "method": "iscsi_set_options", 00:04:30.648 "params": { 00:04:30.648 "node_base": "iqn.2016-06.io.spdk", 00:04:30.648 "max_sessions": 128, 00:04:30.648 "max_connections_per_session": 2, 00:04:30.648 "max_queue_depth": 64, 00:04:30.648 
"default_time2wait": 2, 00:04:30.648 "default_time2retain": 20, 00:04:30.648 "first_burst_length": 8192, 00:04:30.648 "immediate_data": true, 00:04:30.648 "allow_duplicated_isid": false, 00:04:30.648 "error_recovery_level": 0, 00:04:30.648 "nop_timeout": 60, 00:04:30.648 "nop_in_interval": 30, 00:04:30.648 "disable_chap": false, 00:04:30.648 "require_chap": false, 00:04:30.648 "mutual_chap": false, 00:04:30.648 "chap_group": 0, 00:04:30.648 "max_large_datain_per_connection": 64, 00:04:30.648 "max_r2t_per_connection": 4, 00:04:30.648 "pdu_pool_size": 36864, 00:04:30.648 "immediate_data_pool_size": 16384, 00:04:30.648 "data_out_pool_size": 2048 00:04:30.648 } 00:04:30.648 } 00:04:30.648 ] 00:04:30.648 } 00:04:30.648 ] 00:04:30.648 } 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3443180 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3443180 ']' 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3443180 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3443180 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3443180' 00:04:30.648 killing process with pid 3443180 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3443180 00:04:30.648 12:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3443180 00:04:33.182 12:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3443661 00:04:33.182 12:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.182 12:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3443661 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3443661 ']' 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3443661 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3443661 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3443661' 00:04:38.451 killing process with pid 3443661 00:04:38.451 
12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3443661 00:04:38.451 12:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3443661 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.356 00:04:40.356 real 0m10.989s 00:04:40.356 user 0m10.551s 00:04:40.356 sys 0m0.872s 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.356 ************************************ 00:04:40.356 END TEST skip_rpc_with_json 00:04:40.356 ************************************ 00:04:40.356 12:07:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.356 12:07:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.356 12:07:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.356 12:07:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.356 ************************************ 00:04:40.356 START TEST skip_rpc_with_delay 00:04:40.356 ************************************ 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.356 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.356 [2024-12-10 12:07:47.175241] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:40.615 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:40.615 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.615 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.615 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.615 00:04:40.615 real 0m0.137s 00:04:40.615 user 0m0.072s 00:04:40.615 sys 0m0.064s 00:04:40.615 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.615 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.615 ************************************ 00:04:40.615 END TEST skip_rpc_with_delay 00:04:40.615 ************************************ 00:04:40.615 12:07:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.615 12:07:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.615 12:07:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.615 12:07:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.615 12:07:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.615 12:07:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.615 ************************************ 00:04:40.615 START TEST exit_on_failed_rpc_init 00:04:40.615 ************************************ 00:04:40.615 12:07:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:40.615 12:07:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3445054 00:04:40.615 12:07:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3445054 00:04:40.615 12:07:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3445054 ']' 00:04:40.615 12:07:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.615 12:07:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.616 12:07:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.616 12:07:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.616 12:07:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.616 12:07:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.616 [2024-12-10 12:07:47.380474] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
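Backing up to the skip_rpc_with_delay result above: the test passes precisely because spdk_tgt refuses the flag combination. `--no-rpc-server` tells the app never to open an RPC socket, while `--wait-for-rpc` tells it to pause until configuration arrives over that socket, so the pair is rejected at startup and the `NOT` wrapper turns the non-zero exit into a pass. Reproduced directly (binary path assumed):

```bash
# Sketch: the contradictory flag pair the test expects to fail fast.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
# -> app.c: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
echo $?   # non-zero; autotest's NOT() helper inverts this into a test pass
```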
00:04:40.616 [2024-12-10 12:07:47.380566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445054 ] 00:04:40.875 [2024-12-10 12:07:47.491924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.875 [2024-12-10 12:07:47.590287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.813 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.813 [2024-12-10 12:07:48.510774] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:04:41.813 [2024-12-10 12:07:48.510877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445283 ] 00:04:41.813 [2024-12-10 12:07:48.623246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.072 [2024-12-10 12:07:48.730660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.072 [2024-12-10 12:07:48.730736] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
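The `Unix domain socket path /var/tmp/spdk.sock in use` error above, together with the `Unable to start RPC service` line that follows, is exactly the outcome exit_on_failed_rpc_init asserts: both targets were launched without `-r`, so both defaulted to the same RPC socket, and the second instance (reactor on core 1) aborts during init. Running two targets side by side needs disjoint endpoints; a sketch, with flags that may vary by setup:

```bash
# Sketch: two concurrent targets want disjoint core masks AND RPC sockets.
# (Depending on hugepage setup, distinct -i shm ids may also be needed.)
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

# Address either instance by pointing rpc.py at its socket with -s:
./scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods >/dev/null && echo second-ok
```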
00:04:42.072 [2024-12-10 12:07:48.730756] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:42.072 [2024-12-10 12:07:48.730765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3445054 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3445054 ']' 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3445054 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.331 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3445054 00:04:42.331 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.331 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.331 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3445054' 00:04:42.331 killing process with pid 3445054 00:04:42.331 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3445054 00:04:42.331 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3445054 00:04:44.867 00:04:44.867 real 0m4.047s 00:04:44.867 user 0m4.405s 00:04:44.867 sys 0m0.583s 00:04:44.867 12:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.867 12:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.867 ************************************ 00:04:44.867 END TEST exit_on_failed_rpc_init 00:04:44.867 ************************************ 00:04:44.867 12:07:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.867 00:04:44.867 real 0m22.979s 00:04:44.867 user 0m22.233s 00:04:44.867 sys 0m2.167s 00:04:44.867 12:07:51 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.867 12:07:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.867 ************************************ 00:04:44.867 END TEST skip_rpc 00:04:44.867 ************************************ 00:04:44.867 12:07:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.867 12:07:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.867 12:07:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.867 12:07:51 -- 
common/autotest_common.sh@10 -- # set +x 00:04:44.867 ************************************ 00:04:44.867 START TEST rpc_client 00:04:44.867 ************************************ 00:04:44.867 12:07:51 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.867 * Looking for test storage... 00:04:44.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:44.867 12:07:51 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.867 12:07:51 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.867 12:07:51 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.867 12:07:51 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.867 12:07:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.867 12:07:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.868 12:07:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:44.868 12:07:51 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.868 12:07:51 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.868 --rc genhtml_branch_coverage=1 00:04:44.868 --rc genhtml_function_coverage=1 00:04:44.868 --rc genhtml_legend=1 00:04:44.868 --rc geninfo_all_blocks=1 00:04:44.868 --rc geninfo_unexecuted_blocks=1 00:04:44.868 00:04:44.868 ' 00:04:44.868 12:07:51 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.868 --rc genhtml_branch_coverage=1 00:04:44.868 --rc genhtml_function_coverage=1 00:04:44.868 --rc genhtml_legend=1 00:04:44.868 --rc geninfo_all_blocks=1 00:04:44.868 --rc geninfo_unexecuted_blocks=1 00:04:44.868 00:04:44.868 ' 00:04:44.868 12:07:51 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.868 --rc genhtml_branch_coverage=1 00:04:44.868 --rc genhtml_function_coverage=1 00:04:44.868 --rc genhtml_legend=1 00:04:44.868 --rc geninfo_all_blocks=1 00:04:44.868 --rc geninfo_unexecuted_blocks=1 00:04:44.868 00:04:44.868 ' 00:04:44.868 12:07:51 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.868 --rc genhtml_branch_coverage=1 00:04:44.868 --rc genhtml_function_coverage=1 00:04:44.868 --rc genhtml_legend=1 00:04:44.868 --rc geninfo_all_blocks=1 00:04:44.868 --rc geninfo_unexecuted_blocks=1 00:04:44.868 00:04:44.868 ' 00:04:44.868 12:07:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:44.868 OK 00:04:44.868 12:07:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.868 00:04:44.868 real 0m0.233s 00:04:44.868 user 0m0.128s 00:04:44.868 sys 0m0.117s 00:04:44.868 12:07:51 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.868 12:07:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.868 ************************************ 00:04:44.868 END TEST rpc_client 00:04:44.868 ************************************ 00:04:45.128 12:07:51 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
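The `cmp_versions` trace above (repeated by every test that sources scripts/common.sh) is how the suite decides that the installed lcov 1.15 predates 2.x and therefore needs the explicit branch/function coverage options: both version strings are split on separators and compared field by field. A compact stand-in with the same effect, leaning on GNU sort's version ordering rather than the repo helper:

```bash
# Sketch: "is version $1 older than $2?" via sort -V.
version_lt() {
    [ "$1" = "$2" ] && return 1                  # equal is not "less than"
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if version_lt 1.15 2; then
    echo "old lcov: enabling --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi
```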
00:04:45.128 12:07:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.128 12:07:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.128 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:04:45.128 ************************************ 00:04:45.128 START TEST json_config 00:04:45.128 ************************************ 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.128 12:07:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.128 12:07:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.128 12:07:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.128 12:07:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.128 12:07:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.128 12:07:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.128 12:07:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.128 12:07:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:45.128 12:07:51 json_config -- scripts/common.sh@345 -- # : 1 00:04:45.128 12:07:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.128 12:07:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.128 12:07:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:45.128 12:07:51 json_config -- scripts/common.sh@353 -- # local d=1 00:04:45.128 12:07:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.128 12:07:51 json_config -- scripts/common.sh@355 -- # echo 1 00:04:45.128 12:07:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.128 12:07:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@353 -- # local d=2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.128 12:07:51 json_config -- scripts/common.sh@355 -- # echo 2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.128 12:07:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.128 12:07:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.128 12:07:51 json_config -- scripts/common.sh@368 -- # return 0 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.128 --rc genhtml_branch_coverage=1 00:04:45.128 --rc genhtml_function_coverage=1 00:04:45.128 --rc genhtml_legend=1 00:04:45.128 --rc geninfo_all_blocks=1 00:04:45.128 --rc geninfo_unexecuted_blocks=1 00:04:45.128 00:04:45.128 ' 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.128 --rc genhtml_branch_coverage=1 00:04:45.128 --rc genhtml_function_coverage=1 00:04:45.128 --rc genhtml_legend=1 00:04:45.128 --rc geninfo_all_blocks=1 00:04:45.128 --rc geninfo_unexecuted_blocks=1 00:04:45.128 00:04:45.128 ' 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.128 --rc genhtml_branch_coverage=1 00:04:45.128 --rc genhtml_function_coverage=1 00:04:45.128 --rc genhtml_legend=1 00:04:45.128 --rc geninfo_all_blocks=1 00:04:45.128 --rc geninfo_unexecuted_blocks=1 00:04:45.128 00:04:45.128 ' 00:04:45.128 12:07:51 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.128 --rc genhtml_branch_coverage=1 00:04:45.128 --rc genhtml_function_coverage=1 00:04:45.128 --rc genhtml_legend=1 00:04:45.128 --rc geninfo_all_blocks=1 00:04:45.128 --rc geninfo_unexecuted_blocks=1 00:04:45.128 00:04:45.128 ' 00:04:45.128 12:07:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:45.128 12:07:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.128 12:07:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.128 12:07:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.128 12:07:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.128 12:07:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.128 12:07:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.128 12:07:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.128 12:07:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.128 12:07:51 json_config -- paths/export.sh@5 -- # export PATH 00:04:45.128 12:07:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.128 12:07:51 json_config -- nvmf/common.sh@51 -- # : 0 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:45.129 12:07:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.129 12:07:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:45.129 INFO: JSON configuration test init 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 12:07:51 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:45.129 12:07:51 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:45.129 12:07:51 json_config -- json_config/common.sh@10 -- # shift 00:04:45.129 12:07:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.129 12:07:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.129 12:07:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.129 12:07:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.129 12:07:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.129 12:07:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3445874 00:04:45.129 12:07:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.129 Waiting for target to run... 00:04:45.129 12:07:51 json_config -- json_config/common.sh@25 -- # waitforlisten 3445874 /var/tmp/spdk_tgt.sock 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 3445874 ']' 00:04:45.129 12:07:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.129 12:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 [2024-12-10 12:07:52.006104] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
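The command line above shows the json_config harness booting its target with `-s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc`: initialization is parked until configuration arrives over RPC, which is what the `load_config` call a few lines below delivers. The same save/replay cycle also closed the earlier skip_rpc_with_json test, where a second target was booted read-only from the captured file. A condensed sketch of that full loop (file names are placeholders):

```bash
SOCK=/var/tmp/spdk_tgt.sock
RPC="./scripts/rpc.py -s $SOCK"

# Boot paused: nothing initializes until RPC says so.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
# ... poll the socket as in the earlier sketch ...

# Replay a saved config; load_config walks the JSON and issues each
# {method, params} pair, including (when needed) the switch out of
# config mode into runtime mode.
$RPC load_config < spdk_tgt_config.json

# Capture the resulting runtime state; a later target can boot straight
# from this file with no RPC server at all:
$RPC save_config > config.json
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json
```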
00:04:45.388 [2024-12-10 12:07:52.006201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445874 ] 00:04:45.647 [2024-12-10 12:07:52.332064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.647 [2024-12-10 12:07:52.429876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.214 12:07:52 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.214 12:07:52 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:46.214 12:07:52 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.214 00:04:46.214 12:07:52 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:46.214 12:07:52 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:46.214 12:07:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.214 12:07:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.214 12:07:52 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:46.214 12:07:52 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:46.214 12:07:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.214 12:07:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.214 12:07:52 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:46.214 12:07:52 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:46.214 12:07:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:50.423 12:07:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:50.424 12:07:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.424 12:07:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:50.424 12:07:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:50.424 12:07:56 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@54 -- # sort 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:50.424 12:07:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.424 12:07:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:50.424 12:07:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.424 12:07:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:50.424 12:07:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:50.424 MallocForNvmf0 00:04:50.424 12:07:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.424 12:07:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.424 MallocForNvmf1 00:04:50.424 12:07:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:50.424 12:07:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:50.683 [2024-12-10 12:07:57.338453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.683 12:07:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:50.683 12:07:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:50.942 12:07:57 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:50.942 12:07:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:50.942 12:07:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:50.942 12:07:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:51.201 12:07:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:51.201 12:07:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:51.459 [2024-12-10 12:07:58.064762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:51.459 12:07:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:51.459 12:07:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.459 12:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.459 12:07:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:51.459 12:07:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.459 12:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.459 12:07:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:51.459 12:07:58 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:51.459 12:07:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:51.718 MallocBdevForConfigChangeCheck 00:04:51.718 12:07:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:51.718 12:07:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.718 12:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.718 12:07:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:51.718 12:07:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.977 12:07:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:51.977 INFO: shutting down applications... 
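The target being shut down here was assembled entirely over RPC in the trace above: two malloc bdevs, a TCP transport, one subsystem wrapping both bdevs as namespaces, and a listener on 127.0.0.1:4420. Replayed as a plain script against the same socket, the sequence is:

```bash
RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

$RPC bdev_malloc_create 8 512  --name MallocForNvmf0    # 8 MB, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB, 1 KiB blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

Ordering matters: the transport must exist before the listener, and the bdevs before the namespaces that wrap them.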
00:04:51.977 12:07:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:51.977 12:07:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:51.977 12:07:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:51.977 12:07:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:53.882 Calling clear_iscsi_subsystem 00:04:53.882 Calling clear_nvmf_subsystem 00:04:53.882 Calling clear_nbd_subsystem 00:04:53.882 Calling clear_ublk_subsystem 00:04:53.882 Calling clear_vhost_blk_subsystem 00:04:53.882 Calling clear_vhost_scsi_subsystem 00:04:53.882 Calling clear_bdev_subsystem 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@352 -- # break 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:53.882 12:08:00 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:53.882 12:08:00 json_config -- json_config/common.sh@31 -- # local app=target 00:04:53.882 12:08:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.882 12:08:00 json_config -- json_config/common.sh@35 -- # [[ -n 3445874 ]] 00:04:53.882 12:08:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3445874 00:04:53.882 12:08:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.882 12:08:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.882 12:08:00 json_config -- json_config/common.sh@41 -- # kill -0 3445874 00:04:53.882 12:08:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.449 12:08:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.449 12:08:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.449 12:08:01 json_config -- json_config/common.sh@41 -- # kill -0 3445874 00:04:54.449 12:08:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.016 12:08:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.016 12:08:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.016 12:08:01 json_config -- json_config/common.sh@41 -- # kill -0 3445874 00:04:55.016 12:08:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.016 12:08:01 json_config -- json_config/common.sh@43 -- # break 00:04:55.016 12:08:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.016 12:08:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.016 SPDK target shutdown done 00:04:55.016 12:08:01 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:55.016 INFO: relaunching applications... 00:04:55.016 12:08:01 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.016 12:08:01 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.016 12:08:01 json_config -- json_config/common.sh@10 -- # shift 00:04:55.016 12:08:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.016 12:08:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.016 12:08:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.017 12:08:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.017 12:08:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.017 12:08:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3447774 00:04:55.017 12:08:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.017 Waiting for target to run... 00:04:55.017 12:08:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.017 12:08:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3447774 /var/tmp/spdk_tgt.sock 00:04:55.017 12:08:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 3447774 ']' 00:04:55.017 12:08:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.017 12:08:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.017 12:08:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.017 12:08:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.017 12:08:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.017 [2024-12-10 12:08:01.752838] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
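With the first target shut down, the test relaunches spdk_tgt from the JSON it just saved, so the second boot should reproduce the original configuration once both sides are normalized. The relaunch reduces to the following — flags verbatim from the trace; the backgrounding and tgt_pid variable are illustrative, as the harness records the pid in app_pid["target"] and polls the socket via waitforlisten:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
  tgt_pid=$!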
00:04:55.017 [2024-12-10 12:08:01.752938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447774 ] 00:04:55.585 [2024-12-10 12:08:02.241901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.585 [2024-12-10 12:08:02.354312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.807 [2024-12-10 12:08:05.988722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.807 [2024-12-10 12:08:06.021078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:59.807 12:08:06 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.807 12:08:06 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:59.807 12:08:06 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.807 00:04:59.807 12:08:06 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:59.807 12:08:06 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:59.807 INFO: Checking if target configuration is the same... 00:04:59.807 12:08:06 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.807 12:08:06 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:59.807 12:08:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.807 + '[' 2 -ne 2 ']' 00:04:59.807 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:59.807 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:59.807 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:59.807 +++ basename /dev/fd/62 00:04:59.807 ++ mktemp /tmp/62.XXX 00:04:59.807 + tmp_file_1=/tmp/62.rgl 00:04:59.807 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.807 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:59.807 + tmp_file_2=/tmp/spdk_tgt_config.json.txw 00:04:59.807 + ret=0 00:04:59.807 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.807 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.807 + diff -u /tmp/62.rgl /tmp/spdk_tgt_config.json.txw 00:04:59.807 + echo 'INFO: JSON config files are the same' 00:04:59.807 INFO: JSON config files are the same 00:04:59.807 + rm /tmp/62.rgl /tmp/spdk_tgt_config.json.txw 00:04:59.807 + exit 0 00:04:59.807 12:08:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:59.807 12:08:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:59.807 INFO: changing configuration and checking if this can be detected... 
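The comparison that just passed is order-insensitive by construction: json_diff.sh runs both the live save_config output and the on-disk file through config_filter.py -method sort before diff -u, so only real content differences count. A sketch with illustrative temp-file names (the script itself uses mktemp, e.g. /tmp/62.rgl above):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.json
  $SPDK/test/json_config/config_filter.py -method sort \
      < $SPDK/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'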
00:04:59.807 12:08:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:59.807 12:08:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:00.133 12:08:06 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.133 12:08:06 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:00.133 12:08:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.133 + '[' 2 -ne 2 ']' 00:05:00.133 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:00.133 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:00.133 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:00.133 +++ basename /dev/fd/62 00:05:00.133 ++ mktemp /tmp/62.XXX 00:05:00.133 + tmp_file_1=/tmp/62.h8Q 00:05:00.133 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.133 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:00.133 + tmp_file_2=/tmp/spdk_tgt_config.json.qVT 00:05:00.133 + ret=0 00:05:00.133 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.404 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.404 + diff -u /tmp/62.h8Q /tmp/spdk_tgt_config.json.qVT 00:05:00.404 + ret=1 00:05:00.404 + echo '=== Start of file: /tmp/62.h8Q ===' 00:05:00.404 + cat /tmp/62.h8Q 00:05:00.404 + echo '=== End of file: /tmp/62.h8Q ===' 00:05:00.404 + echo '' 00:05:00.404 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qVT ===' 00:05:00.404 + cat /tmp/spdk_tgt_config.json.qVT 00:05:00.404 + echo '=== End of file: /tmp/spdk_tgt_config.json.qVT ===' 00:05:00.404 + echo '' 00:05:00.404 + rm /tmp/62.h8Q /tmp/spdk_tgt_config.json.qVT 00:05:00.404 + exit 1 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:00.404 INFO: configuration change detected. 
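The negative check above is what gives the comparison teeth: deleting MallocBdevForConfigChangeCheck must make the same normalized diff return nonzero (the ret=1 in the trace), otherwise the test could pass vacuously. Continuing the sketch from the previous block, with the same illustrative temp names:

  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.json
  diff -u /tmp/live.json /tmp/saved.json && { echo 'ERROR: change not detected'; exit 1; }
  echo 'INFO: configuration change detected.'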
00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@324 -- # [[ -n 3447774 ]] 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 12:08:07 json_config -- json_config/json_config.sh@330 -- # killprocess 3447774 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@954 -- # '[' -z 3447774 ']' 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@958 -- # kill -0 3447774 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@959 -- # uname 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447774 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447774' 00:05:00.404 killing process with pid 3447774 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@973 -- # kill 3447774 00:05:00.404 12:08:07 json_config -- common/autotest_common.sh@978 -- # wait 3447774 00:05:02.940 12:08:09 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.940 12:08:09 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:02.940 12:08:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.940 12:08:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.940 12:08:09 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:02.940 12:08:09 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:02.940 INFO: Success 00:05:02.940 00:05:02.940 real 0m17.640s 
00:05:02.940 user 0m18.091s 00:05:02.940 sys 0m2.758s 00:05:02.940 12:08:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.940 12:08:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.940 ************************************ 00:05:02.940 END TEST json_config 00:05:02.940 ************************************ 00:05:02.940 12:08:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:02.940 12:08:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.940 12:08:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.940 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.940 ************************************ 00:05:02.940 START TEST json_config_extra_key 00:05:02.940 ************************************ 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.940 12:08:09 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.940 --rc genhtml_branch_coverage=1 00:05:02.940 --rc genhtml_function_coverage=1 00:05:02.940 --rc genhtml_legend=1 00:05:02.940 --rc geninfo_all_blocks=1 00:05:02.940 --rc geninfo_unexecuted_blocks=1 00:05:02.940 00:05:02.940 ' 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.940 --rc genhtml_branch_coverage=1 00:05:02.940 --rc genhtml_function_coverage=1 00:05:02.940 --rc genhtml_legend=1 00:05:02.940 --rc geninfo_all_blocks=1 00:05:02.940 --rc geninfo_unexecuted_blocks=1 00:05:02.940 00:05:02.940 ' 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.940 --rc genhtml_branch_coverage=1 00:05:02.940 --rc genhtml_function_coverage=1 00:05:02.940 --rc genhtml_legend=1 00:05:02.940 --rc geninfo_all_blocks=1 00:05:02.940 --rc geninfo_unexecuted_blocks=1 00:05:02.940 00:05:02.940 ' 00:05:02.940 12:08:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.940 --rc genhtml_branch_coverage=1 00:05:02.940 --rc genhtml_function_coverage=1 00:05:02.940 --rc genhtml_legend=1 00:05:02.940 --rc geninfo_all_blocks=1 00:05:02.940 --rc geninfo_unexecuted_blocks=1 00:05:02.940 00:05:02.940 ' 00:05:02.940 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.940 
12:08:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.940 12:08:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.941 12:08:09 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.941 12:08:09 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.941 12:08:09 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.941 12:08:09 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.941 12:08:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.941 12:08:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.941 12:08:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.941 12:08:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:02.941 12:08:09 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.941 12:08:09 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:02.941 INFO: launching applications... 
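json_config/common.sh keys everything per application through associative arrays indexed by the app name, which is what the declare -A lines above establish; the same helpers can then start, query, and stop any app by name. A minimal sketch of that bookkeeping, with the values traced for this extra_key run — the bare spdk_tgt command line at the end is illustrative composition, not the script's literal launch statement:

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
  app=target
  spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
  app_pid[$app]=$!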
00:05:02.941 12:08:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3449206 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.941 Waiting for target to run... 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3449206 /var/tmp/spdk_tgt.sock 00:05:02.941 12:08:09 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3449206 ']' 00:05:02.941 12:08:09 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.941 12:08:09 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:02.941 12:08:09 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.941 12:08:09 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.941 12:08:09 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.941 12:08:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.941 [2024-12-10 12:08:09.706301] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:02.941 [2024-12-10 12:08:09.706396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449206 ] 00:05:03.509 [2024-12-10 12:08:10.036055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.509 [2024-12-10 12:08:10.140331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.078 12:08:10 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.078 12:08:10 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.078 00:05:04.078 12:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:04.078 INFO: shutting down applications... 
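The shutdown that begins here is cooperative: json_config_test_shutdown_app sends one SIGINT, then polls with kill -0 in half-second steps for at most 30 iterations before it would give up — exactly the loop traced below. As a standalone sketch (the pid variable name is illustrative):

  kill -SIGINT "$pid"                        # ask the target to exit cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break    # signal 0 only tests process existence
      sleep 0.5
  done
  echo 'SPDK target shutdown done'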
00:05:04.078 12:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3449206 ]] 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3449206 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3449206 00:05:04.078 12:08:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.646 12:08:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.646 12:08:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.646 12:08:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3449206 00:05:04.646 12:08:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.214 12:08:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.214 12:08:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.214 12:08:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3449206 00:05:05.214 12:08:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.783 12:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.783 12:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.783 12:08:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3449206 00:05:05.783 12:08:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.041 12:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.041 12:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.041 12:08:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3449206 00:05:06.041 12:08:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.609 12:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.609 12:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.609 12:08:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3449206 00:05:06.609 12:08:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.175 12:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.175 12:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.175 12:08:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3449206 00:05:07.175 12:08:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.175 12:08:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.175 12:08:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.175 12:08:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.175 SPDK target shutdown done 00:05:07.175 12:08:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.175 Success 00:05:07.175 00:05:07.175 real 0m4.418s 00:05:07.175 user 0m3.890s 00:05:07.175 sys 0m0.539s 
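Each suite re-sources scripts/common.sh, so the lcov version gate (the cmp_versions trace around lt 1.15 2, already seen for json_config and about to repeat for alias_rpc) runs before every test: versions are split on '.' and '-' and compared numerically field by field, with missing fields treated as zero. A self-contained sketch of that comparison — version_lt is an illustrative name for what common.sh calls lt, and purely numeric fields are assumed:

  version_lt() {   # succeed when $1 < $2
      local IFS=.- i a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1     # versions are equal
  }
  version_lt 1.15 2 && echo 'old lcov: enable branch/function coverage flags'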
00:05:07.175 12:08:13 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.175 12:08:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.175 ************************************ 00:05:07.175 END TEST json_config_extra_key 00:05:07.175 ************************************ 00:05:07.175 12:08:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.175 12:08:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.175 12:08:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.175 12:08:13 -- common/autotest_common.sh@10 -- # set +x 00:05:07.175 ************************************ 00:05:07.175 START TEST alias_rpc 00:05:07.175 ************************************ 00:05:07.175 12:08:13 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.438 * Looking for test storage... 00:05:07.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.438 12:08:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.438 --rc genhtml_branch_coverage=1 00:05:07.438 --rc genhtml_function_coverage=1 00:05:07.438 --rc genhtml_legend=1 00:05:07.438 --rc geninfo_all_blocks=1 00:05:07.438 --rc geninfo_unexecuted_blocks=1 00:05:07.438 00:05:07.438 ' 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.438 --rc genhtml_branch_coverage=1 00:05:07.438 --rc genhtml_function_coverage=1 00:05:07.438 --rc genhtml_legend=1 00:05:07.438 --rc geninfo_all_blocks=1 00:05:07.438 --rc geninfo_unexecuted_blocks=1 00:05:07.438 00:05:07.438 ' 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.438 --rc genhtml_branch_coverage=1 00:05:07.438 --rc genhtml_function_coverage=1 00:05:07.438 --rc genhtml_legend=1 00:05:07.438 --rc geninfo_all_blocks=1 00:05:07.438 --rc geninfo_unexecuted_blocks=1 00:05:07.438 00:05:07.438 ' 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.438 --rc genhtml_branch_coverage=1 00:05:07.438 --rc genhtml_function_coverage=1 00:05:07.438 --rc genhtml_legend=1 00:05:07.438 --rc geninfo_all_blocks=1 00:05:07.438 --rc geninfo_unexecuted_blocks=1 00:05:07.438 00:05:07.438 ' 00:05:07.438 12:08:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.438 12:08:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3449989 00:05:07.438 12:08:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.438 12:08:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3449989 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3449989 ']' 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:07.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.438 12:08:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.438 [2024-12-10 12:08:14.193875] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:07.438 [2024-12-10 12:08:14.193967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449989 ] 00:05:07.697 [2024-12-10 12:08:14.306440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.697 [2024-12-10 12:08:14.407609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.632 12:08:15 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.632 12:08:15 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:08.632 12:08:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:08.891 12:08:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3449989 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3449989 ']' 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3449989 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3449989 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3449989' 00:05:08.891 killing process with pid 3449989 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 3449989 00:05:08.891 12:08:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 3449989 00:05:11.427 00:05:11.427 real 0m3.916s 00:05:11.427 user 0m3.929s 00:05:11.427 sys 0m0.571s 00:05:11.427 12:08:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.427 12:08:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.427 ************************************ 00:05:11.427 END TEST alias_rpc 00:05:11.427 ************************************ 00:05:11.427 12:08:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:11.427 12:08:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:11.427 12:08:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.427 12:08:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.427 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:05:11.427 ************************************ 00:05:11.427 START TEST spdkcli_tcp 00:05:11.427 ************************************ 00:05:11.427 12:08:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:11.427 * Looking for test storage... 
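alias_rpc replays configuration through rpc.py load_config -i (judging from this suite's purpose, the flag also loads deprecated method aliases, though its long form is not shown in the trace) and then tears the target down with killprocess. The teardown pattern visible above first proves the pid exists with kill -0, inspects the command name with ps, refuses to kill anything resolving to sudo, then kills and reaps. A reduced sketch — the real helper in autotest_common.sh has additional OS branches:

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 1                  # fail fast if already gone
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name != sudo ]] || return 1             # never kill a privileged wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                  # wait reaps our own child
  }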
00:05:11.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:11.427 12:08:17 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.427 12:08:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.427 12:08:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.427 12:08:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.427 --rc genhtml_branch_coverage=1 00:05:11.427 --rc genhtml_function_coverage=1 00:05:11.427 --rc genhtml_legend=1 00:05:11.427 --rc geninfo_all_blocks=1 00:05:11.427 --rc geninfo_unexecuted_blocks=1 00:05:11.427 00:05:11.427 ' 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.427 --rc genhtml_branch_coverage=1 00:05:11.427 --rc genhtml_function_coverage=1 00:05:11.427 --rc genhtml_legend=1 00:05:11.427 --rc geninfo_all_blocks=1 00:05:11.427 --rc 
geninfo_unexecuted_blocks=1 00:05:11.427 00:05:11.427 ' 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.427 --rc genhtml_branch_coverage=1 00:05:11.427 --rc genhtml_function_coverage=1 00:05:11.427 --rc genhtml_legend=1 00:05:11.427 --rc geninfo_all_blocks=1 00:05:11.427 --rc geninfo_unexecuted_blocks=1 00:05:11.427 00:05:11.427 ' 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.427 --rc genhtml_branch_coverage=1 00:05:11.427 --rc genhtml_function_coverage=1 00:05:11.427 --rc genhtml_legend=1 00:05:11.427 --rc geninfo_all_blocks=1 00:05:11.427 --rc geninfo_unexecuted_blocks=1 00:05:11.427 00:05:11.427 ' 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3450728 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3450728 00:05:11.427 12:08:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3450728 ']' 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.427 12:08:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.427 [2024-12-10 12:08:18.169354] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
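spdkcli_tcp does not talk to the UNIX socket directly: as traced below, a socat process bridges TCP 127.0.0.1:9998 to /var/tmp/spdk.sock (its pid is kept for the err_cleanup trap set above), and rpc.py then connects over TCP, which is what exercises the client's TCP transport path. The bridge and first query reduce to the following, with -r and -t appearing to be the connection-retry count and per-attempt timeout in seconds:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!    # illustrative name for the pid err_cleanup would kill
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods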
00:05:11.427 [2024-12-10 12:08:18.169441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450728 ] 00:05:11.687 [2024-12-10 12:08:18.281113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.687 [2024-12-10 12:08:18.386607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.687 [2024-12-10 12:08:18.386617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.625 12:08:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.625 12:08:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:12.625 12:08:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:12.625 12:08:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3450953 00:05:12.625 12:08:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:12.625 [ 00:05:12.625 "bdev_malloc_delete", 00:05:12.625 "bdev_malloc_create", 00:05:12.625 "bdev_null_resize", 00:05:12.625 "bdev_null_delete", 00:05:12.625 "bdev_null_create", 00:05:12.625 "bdev_nvme_cuse_unregister", 00:05:12.625 "bdev_nvme_cuse_register", 00:05:12.625 "bdev_opal_new_user", 00:05:12.625 "bdev_opal_set_lock_state", 00:05:12.625 "bdev_opal_delete", 00:05:12.625 "bdev_opal_get_info", 00:05:12.625 "bdev_opal_create", 00:05:12.625 "bdev_nvme_opal_revert", 00:05:12.625 "bdev_nvme_opal_init", 00:05:12.625 "bdev_nvme_send_cmd", 00:05:12.625 "bdev_nvme_set_keys", 00:05:12.625 "bdev_nvme_get_path_iostat", 00:05:12.625 "bdev_nvme_get_mdns_discovery_info", 00:05:12.625 "bdev_nvme_stop_mdns_discovery", 00:05:12.625 "bdev_nvme_start_mdns_discovery", 00:05:12.625 "bdev_nvme_set_multipath_policy", 00:05:12.625 "bdev_nvme_set_preferred_path", 00:05:12.625 "bdev_nvme_get_io_paths", 00:05:12.625 "bdev_nvme_remove_error_injection", 00:05:12.625 "bdev_nvme_add_error_injection", 00:05:12.625 "bdev_nvme_get_discovery_info", 00:05:12.625 "bdev_nvme_stop_discovery", 00:05:12.625 "bdev_nvme_start_discovery", 00:05:12.625 "bdev_nvme_get_controller_health_info", 00:05:12.625 "bdev_nvme_disable_controller", 00:05:12.625 "bdev_nvme_enable_controller", 00:05:12.625 "bdev_nvme_reset_controller", 00:05:12.625 "bdev_nvme_get_transport_statistics", 00:05:12.625 "bdev_nvme_apply_firmware", 00:05:12.625 "bdev_nvme_detach_controller", 00:05:12.625 "bdev_nvme_get_controllers", 00:05:12.625 "bdev_nvme_attach_controller", 00:05:12.625 "bdev_nvme_set_hotplug", 00:05:12.625 "bdev_nvme_set_options", 00:05:12.625 "bdev_passthru_delete", 00:05:12.625 "bdev_passthru_create", 00:05:12.625 "bdev_lvol_set_parent_bdev", 00:05:12.625 "bdev_lvol_set_parent", 00:05:12.625 "bdev_lvol_check_shallow_copy", 00:05:12.625 "bdev_lvol_start_shallow_copy", 00:05:12.625 "bdev_lvol_grow_lvstore", 00:05:12.625 "bdev_lvol_get_lvols", 00:05:12.625 "bdev_lvol_get_lvstores", 00:05:12.625 "bdev_lvol_delete", 00:05:12.625 "bdev_lvol_set_read_only", 00:05:12.625 "bdev_lvol_resize", 00:05:12.625 "bdev_lvol_decouple_parent", 00:05:12.625 "bdev_lvol_inflate", 00:05:12.625 "bdev_lvol_rename", 00:05:12.625 "bdev_lvol_clone_bdev", 00:05:12.625 "bdev_lvol_clone", 00:05:12.625 "bdev_lvol_snapshot", 00:05:12.625 "bdev_lvol_create", 00:05:12.625 "bdev_lvol_delete_lvstore", 00:05:12.625 "bdev_lvol_rename_lvstore", 
00:05:12.625 "bdev_lvol_create_lvstore", 00:05:12.625 "bdev_raid_set_options", 00:05:12.625 "bdev_raid_remove_base_bdev", 00:05:12.625 "bdev_raid_add_base_bdev", 00:05:12.625 "bdev_raid_delete", 00:05:12.625 "bdev_raid_create", 00:05:12.625 "bdev_raid_get_bdevs", 00:05:12.625 "bdev_error_inject_error", 00:05:12.625 "bdev_error_delete", 00:05:12.625 "bdev_error_create", 00:05:12.625 "bdev_split_delete", 00:05:12.625 "bdev_split_create", 00:05:12.625 "bdev_delay_delete", 00:05:12.625 "bdev_delay_create", 00:05:12.625 "bdev_delay_update_latency", 00:05:12.625 "bdev_zone_block_delete", 00:05:12.625 "bdev_zone_block_create", 00:05:12.625 "blobfs_create", 00:05:12.625 "blobfs_detect", 00:05:12.625 "blobfs_set_cache_size", 00:05:12.625 "bdev_aio_delete", 00:05:12.625 "bdev_aio_rescan", 00:05:12.625 "bdev_aio_create", 00:05:12.625 "bdev_ftl_set_property", 00:05:12.625 "bdev_ftl_get_properties", 00:05:12.625 "bdev_ftl_get_stats", 00:05:12.625 "bdev_ftl_unmap", 00:05:12.625 "bdev_ftl_unload", 00:05:12.625 "bdev_ftl_delete", 00:05:12.625 "bdev_ftl_load", 00:05:12.625 "bdev_ftl_create", 00:05:12.625 "bdev_virtio_attach_controller", 00:05:12.625 "bdev_virtio_scsi_get_devices", 00:05:12.625 "bdev_virtio_detach_controller", 00:05:12.625 "bdev_virtio_blk_set_hotplug", 00:05:12.625 "bdev_iscsi_delete", 00:05:12.625 "bdev_iscsi_create", 00:05:12.625 "bdev_iscsi_set_options", 00:05:12.625 "accel_error_inject_error", 00:05:12.625 "ioat_scan_accel_module", 00:05:12.625 "dsa_scan_accel_module", 00:05:12.625 "iaa_scan_accel_module", 00:05:12.625 "keyring_file_remove_key", 00:05:12.625 "keyring_file_add_key", 00:05:12.625 "keyring_linux_set_options", 00:05:12.625 "fsdev_aio_delete", 00:05:12.625 "fsdev_aio_create", 00:05:12.625 "iscsi_get_histogram", 00:05:12.625 "iscsi_enable_histogram", 00:05:12.625 "iscsi_set_options", 00:05:12.625 "iscsi_get_auth_groups", 00:05:12.625 "iscsi_auth_group_remove_secret", 00:05:12.625 "iscsi_auth_group_add_secret", 00:05:12.625 "iscsi_delete_auth_group", 00:05:12.625 "iscsi_create_auth_group", 00:05:12.625 "iscsi_set_discovery_auth", 00:05:12.625 "iscsi_get_options", 00:05:12.625 "iscsi_target_node_request_logout", 00:05:12.625 "iscsi_target_node_set_redirect", 00:05:12.625 "iscsi_target_node_set_auth", 00:05:12.625 "iscsi_target_node_add_lun", 00:05:12.625 "iscsi_get_stats", 00:05:12.625 "iscsi_get_connections", 00:05:12.625 "iscsi_portal_group_set_auth", 00:05:12.625 "iscsi_start_portal_group", 00:05:12.625 "iscsi_delete_portal_group", 00:05:12.625 "iscsi_create_portal_group", 00:05:12.625 "iscsi_get_portal_groups", 00:05:12.625 "iscsi_delete_target_node", 00:05:12.625 "iscsi_target_node_remove_pg_ig_maps", 00:05:12.625 "iscsi_target_node_add_pg_ig_maps", 00:05:12.625 "iscsi_create_target_node", 00:05:12.625 "iscsi_get_target_nodes", 00:05:12.625 "iscsi_delete_initiator_group", 00:05:12.625 "iscsi_initiator_group_remove_initiators", 00:05:12.625 "iscsi_initiator_group_add_initiators", 00:05:12.625 "iscsi_create_initiator_group", 00:05:12.625 "iscsi_get_initiator_groups", 00:05:12.625 "nvmf_set_crdt", 00:05:12.625 "nvmf_set_config", 00:05:12.625 "nvmf_set_max_subsystems", 00:05:12.625 "nvmf_stop_mdns_prr", 00:05:12.625 "nvmf_publish_mdns_prr", 00:05:12.625 "nvmf_subsystem_get_listeners", 00:05:12.625 "nvmf_subsystem_get_qpairs", 00:05:12.625 "nvmf_subsystem_get_controllers", 00:05:12.625 "nvmf_get_stats", 00:05:12.625 "nvmf_get_transports", 00:05:12.625 "nvmf_create_transport", 00:05:12.625 "nvmf_get_targets", 00:05:12.625 "nvmf_delete_target", 00:05:12.625 "nvmf_create_target", 
00:05:12.625 "nvmf_subsystem_allow_any_host", 00:05:12.625 "nvmf_subsystem_set_keys", 00:05:12.625 "nvmf_subsystem_remove_host", 00:05:12.625 "nvmf_subsystem_add_host", 00:05:12.625 "nvmf_ns_remove_host", 00:05:12.625 "nvmf_ns_add_host", 00:05:12.625 "nvmf_subsystem_remove_ns", 00:05:12.625 "nvmf_subsystem_set_ns_ana_group", 00:05:12.625 "nvmf_subsystem_add_ns", 00:05:12.625 "nvmf_subsystem_listener_set_ana_state", 00:05:12.625 "nvmf_discovery_get_referrals", 00:05:12.625 "nvmf_discovery_remove_referral", 00:05:12.625 "nvmf_discovery_add_referral", 00:05:12.625 "nvmf_subsystem_remove_listener", 00:05:12.625 "nvmf_subsystem_add_listener", 00:05:12.625 "nvmf_delete_subsystem", 00:05:12.625 "nvmf_create_subsystem", 00:05:12.625 "nvmf_get_subsystems", 00:05:12.625 "env_dpdk_get_mem_stats", 00:05:12.625 "nbd_get_disks", 00:05:12.625 "nbd_stop_disk", 00:05:12.625 "nbd_start_disk", 00:05:12.625 "ublk_recover_disk", 00:05:12.625 "ublk_get_disks", 00:05:12.625 "ublk_stop_disk", 00:05:12.625 "ublk_start_disk", 00:05:12.625 "ublk_destroy_target", 00:05:12.625 "ublk_create_target", 00:05:12.625 "virtio_blk_create_transport", 00:05:12.625 "virtio_blk_get_transports", 00:05:12.625 "vhost_controller_set_coalescing", 00:05:12.625 "vhost_get_controllers", 00:05:12.625 "vhost_delete_controller", 00:05:12.625 "vhost_create_blk_controller", 00:05:12.625 "vhost_scsi_controller_remove_target", 00:05:12.625 "vhost_scsi_controller_add_target", 00:05:12.625 "vhost_start_scsi_controller", 00:05:12.625 "vhost_create_scsi_controller", 00:05:12.625 "thread_set_cpumask", 00:05:12.625 "scheduler_set_options", 00:05:12.626 "framework_get_governor", 00:05:12.626 "framework_get_scheduler", 00:05:12.626 "framework_set_scheduler", 00:05:12.626 "framework_get_reactors", 00:05:12.626 "thread_get_io_channels", 00:05:12.626 "thread_get_pollers", 00:05:12.626 "thread_get_stats", 00:05:12.626 "framework_monitor_context_switch", 00:05:12.626 "spdk_kill_instance", 00:05:12.626 "log_enable_timestamps", 00:05:12.626 "log_get_flags", 00:05:12.626 "log_clear_flag", 00:05:12.626 "log_set_flag", 00:05:12.626 "log_get_level", 00:05:12.626 "log_set_level", 00:05:12.626 "log_get_print_level", 00:05:12.626 "log_set_print_level", 00:05:12.626 "framework_enable_cpumask_locks", 00:05:12.626 "framework_disable_cpumask_locks", 00:05:12.626 "framework_wait_init", 00:05:12.626 "framework_start_init", 00:05:12.626 "scsi_get_devices", 00:05:12.626 "bdev_get_histogram", 00:05:12.626 "bdev_enable_histogram", 00:05:12.626 "bdev_set_qos_limit", 00:05:12.626 "bdev_set_qd_sampling_period", 00:05:12.626 "bdev_get_bdevs", 00:05:12.626 "bdev_reset_iostat", 00:05:12.626 "bdev_get_iostat", 00:05:12.626 "bdev_examine", 00:05:12.626 "bdev_wait_for_examine", 00:05:12.626 "bdev_set_options", 00:05:12.626 "accel_get_stats", 00:05:12.626 "accel_set_options", 00:05:12.626 "accel_set_driver", 00:05:12.626 "accel_crypto_key_destroy", 00:05:12.626 "accel_crypto_keys_get", 00:05:12.626 "accel_crypto_key_create", 00:05:12.626 "accel_assign_opc", 00:05:12.626 "accel_get_module_info", 00:05:12.626 "accel_get_opc_assignments", 00:05:12.626 "vmd_rescan", 00:05:12.626 "vmd_remove_device", 00:05:12.626 "vmd_enable", 00:05:12.626 "sock_get_default_impl", 00:05:12.626 "sock_set_default_impl", 00:05:12.626 "sock_impl_set_options", 00:05:12.626 "sock_impl_get_options", 00:05:12.626 "iobuf_get_stats", 00:05:12.626 "iobuf_set_options", 00:05:12.626 "keyring_get_keys", 00:05:12.626 "framework_get_pci_devices", 00:05:12.626 "framework_get_config", 00:05:12.626 "framework_get_subsystems", 
00:05:12.626 "fsdev_set_opts", 00:05:12.626 "fsdev_get_opts", 00:05:12.626 "trace_get_info", 00:05:12.626 "trace_get_tpoint_group_mask", 00:05:12.626 "trace_disable_tpoint_group", 00:05:12.626 "trace_enable_tpoint_group", 00:05:12.626 "trace_clear_tpoint_mask", 00:05:12.626 "trace_set_tpoint_mask", 00:05:12.626 "notify_get_notifications", 00:05:12.626 "notify_get_types", 00:05:12.626 "spdk_get_version", 00:05:12.626 "rpc_get_methods" 00:05:12.626 ] 00:05:12.626 12:08:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:12.626 12:08:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.626 12:08:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.885 12:08:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:12.885 12:08:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3450728 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3450728 ']' 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3450728 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450728 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450728' 00:05:12.885 killing process with pid 3450728 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3450728 00:05:12.885 12:08:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3450728 00:05:15.418 00:05:15.418 real 0m3.988s 00:05:15.418 user 0m7.281s 00:05:15.418 sys 0m0.587s 00:05:15.418 12:08:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.418 12:08:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.418 ************************************ 00:05:15.418 END TEST spdkcli_tcp 00:05:15.418 ************************************ 00:05:15.418 12:08:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.418 12:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.418 12:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.418 12:08:21 -- common/autotest_common.sh@10 -- # set +x 00:05:15.418 ************************************ 00:05:15.418 START TEST dpdk_mem_utility 00:05:15.418 ************************************ 00:05:15.418 12:08:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.418 * Looking for test storage... 
00:05:15.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:15.418 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.418 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.418 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.418 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:15.418 12:08:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.419 12:08:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:15.419 12:08:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.419 12:08:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.419 12:08:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.419 12:08:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.419 --rc genhtml_branch_coverage=1 00:05:15.419 --rc genhtml_function_coverage=1 00:05:15.419 --rc genhtml_legend=1 00:05:15.419 --rc geninfo_all_blocks=1 00:05:15.419 --rc geninfo_unexecuted_blocks=1 00:05:15.419 00:05:15.419 ' 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.419 --rc 
genhtml_branch_coverage=1 00:05:15.419 --rc genhtml_function_coverage=1 00:05:15.419 --rc genhtml_legend=1 00:05:15.419 --rc geninfo_all_blocks=1 00:05:15.419 --rc geninfo_unexecuted_blocks=1 00:05:15.419 00:05:15.419 ' 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.419 --rc genhtml_branch_coverage=1 00:05:15.419 --rc genhtml_function_coverage=1 00:05:15.419 --rc genhtml_legend=1 00:05:15.419 --rc geninfo_all_blocks=1 00:05:15.419 --rc geninfo_unexecuted_blocks=1 00:05:15.419 00:05:15.419 ' 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.419 --rc genhtml_branch_coverage=1 00:05:15.419 --rc genhtml_function_coverage=1 00:05:15.419 --rc genhtml_legend=1 00:05:15.419 --rc geninfo_all_blocks=1 00:05:15.419 --rc geninfo_unexecuted_blocks=1 00:05:15.419 00:05:15.419 ' 00:05:15.419 12:08:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.419 12:08:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3451476 00:05:15.419 12:08:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3451476 00:05:15.419 12:08:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3451476 ']' 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.419 12:08:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.419 [2024-12-10 12:08:22.226383] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:05:15.419 [2024-12-10 12:08:22.226485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451476 ] 00:05:15.675 [2024-12-10 12:08:22.336803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.675 [2024-12-10 12:08:22.442955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.610 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.610 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:16.610 12:08:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.610 12:08:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.610 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.610 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.610 { 00:05:16.610 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.610 } 00:05:16.610 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.610 12:08:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:16.610 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:16.610 1 heaps totaling size 824.000000 MiB 00:05:16.610 size: 824.000000 MiB heap id: 0 00:05:16.610 end heaps---------- 00:05:16.610 9 mempools totaling size 603.782043 MiB 00:05:16.610 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.610 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.610 size: 100.555481 MiB name: bdev_io_3451476 00:05:16.610 size: 50.003479 MiB name: msgpool_3451476 00:05:16.610 size: 36.509338 MiB name: fsdev_io_3451476 00:05:16.610 size: 21.763794 MiB name: PDU_Pool 00:05:16.610 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.610 size: 4.133484 MiB name: evtpool_3451476 00:05:16.610 size: 0.026123 MiB name: Session_Pool 00:05:16.610 end mempools------- 00:05:16.610 6 memzones totaling size 4.142822 MiB 00:05:16.610 size: 1.000366 MiB name: RG_ring_0_3451476 00:05:16.610 size: 1.000366 MiB name: RG_ring_1_3451476 00:05:16.610 size: 1.000366 MiB name: RG_ring_4_3451476 00:05:16.610 size: 1.000366 MiB name: RG_ring_5_3451476 00:05:16.610 size: 0.125366 MiB name: RG_ring_2_3451476 00:05:16.610 size: 0.015991 MiB name: RG_ring_3_3451476 00:05:16.610 end memzones------- 00:05:16.610 12:08:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.610 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:16.610 list of free elements. 
size: 16.847595 MiB 00:05:16.610 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:16.610 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:16.610 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:16.610 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:16.610 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:16.610 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:16.610 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:16.610 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:16.610 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:16.610 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:16.610 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:16.610 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:16.610 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:16.610 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:16.610 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:16.610 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:16.610 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:16.610 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:16.610 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:16.610 list of standard malloc elements. size: 199.221497 MiB 00:05:16.610 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:16.610 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:16.610 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:16.610 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:16.610 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:16.610 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:16.610 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:16.610 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:16.610 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:16.610 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:16.610 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:16.610 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:16.610 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:16.610 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:16.610 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:16.610 list of memzone associated elements. size: 607.930908 MiB 00:05:16.610 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:16.610 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.610 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:16.610 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.610 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:16.610 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3451476_0 00:05:16.610 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:16.610 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3451476_0 00:05:16.610 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:16.610 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3451476_0 00:05:16.610 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:16.610 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.610 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:16.610 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.610 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:16.610 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3451476_0 00:05:16.610 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:16.610 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3451476 00:05:16.610 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:16.610 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3451476 00:05:16.610 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:16.610 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.610 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:16.610 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.610 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:16.610 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.611 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:16.611 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.611 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:16.611 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3451476 00:05:16.611 element at address: 0x2000008ffb80 
with size: 1.000549 MiB 00:05:16.611 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3451476 00:05:16.611 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:16.611 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3451476 00:05:16.611 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:16.611 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3451476 00:05:16.611 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:16.611 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3451476 00:05:16.611 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:16.611 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3451476 00:05:16.611 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:16.611 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.611 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:16.611 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.611 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:16.611 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.611 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:16.611 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3451476 00:05:16.611 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:16.611 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3451476 00:05:16.611 element at address: 0x2000192f5bc0 with size: 0.031799 MiB 00:05:16.611 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.611 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:16.611 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.611 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:16.611 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3451476 00:05:16.611 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:16.611 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.611 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:16.611 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3451476 00:05:16.611 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:16.611 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3451476 00:05:16.611 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:16.611 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3451476 00:05:16.611 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:16.611 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.611 12:08:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.611 12:08:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3451476 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3451476 ']' 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3451476 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451476 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451476' 00:05:16.611 killing process with pid 3451476 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3451476 00:05:16.611 12:08:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3451476 00:05:19.142 00:05:19.142 real 0m3.737s 00:05:19.142 user 0m3.695s 00:05:19.142 sys 0m0.549s 00:05:19.142 12:08:25 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.142 12:08:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.142 ************************************ 00:05:19.142 END TEST dpdk_mem_utility 00:05:19.142 ************************************ 00:05:19.142 12:08:25 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.142 12:08:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.142 12:08:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.142 12:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:19.142 ************************************ 00:05:19.142 START TEST event 00:05:19.142 ************************************ 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.142 * Looking for test storage... 00:05:19.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.142 12:08:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.142 12:08:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.142 12:08:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.142 12:08:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.142 12:08:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.142 12:08:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.142 12:08:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.142 12:08:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.142 12:08:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.142 12:08:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.142 12:08:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.142 12:08:25 event -- scripts/common.sh@344 -- # case "$op" in 00:05:19.142 12:08:25 event -- scripts/common.sh@345 -- # : 1 00:05:19.142 12:08:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.142 12:08:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.142 12:08:25 event -- scripts/common.sh@365 -- # decimal 1 00:05:19.142 12:08:25 event -- scripts/common.sh@353 -- # local d=1 00:05:19.142 12:08:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.142 12:08:25 event -- scripts/common.sh@355 -- # echo 1 00:05:19.142 12:08:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.142 12:08:25 event -- scripts/common.sh@366 -- # decimal 2 00:05:19.142 12:08:25 event -- scripts/common.sh@353 -- # local d=2 00:05:19.142 12:08:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.142 12:08:25 event -- scripts/common.sh@355 -- # echo 2 00:05:19.142 12:08:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.142 12:08:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.142 12:08:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.142 12:08:25 event -- scripts/common.sh@368 -- # return 0 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.142 --rc genhtml_branch_coverage=1 00:05:19.142 --rc genhtml_function_coverage=1 00:05:19.142 --rc genhtml_legend=1 00:05:19.142 --rc geninfo_all_blocks=1 00:05:19.142 --rc geninfo_unexecuted_blocks=1 00:05:19.142 00:05:19.142 ' 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.142 --rc genhtml_branch_coverage=1 00:05:19.142 --rc genhtml_function_coverage=1 00:05:19.142 --rc genhtml_legend=1 00:05:19.142 --rc geninfo_all_blocks=1 00:05:19.142 --rc geninfo_unexecuted_blocks=1 00:05:19.142 00:05:19.142 ' 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.142 --rc genhtml_branch_coverage=1 00:05:19.142 --rc genhtml_function_coverage=1 00:05:19.142 --rc genhtml_legend=1 00:05:19.142 --rc geninfo_all_blocks=1 00:05:19.142 --rc geninfo_unexecuted_blocks=1 00:05:19.142 00:05:19.142 ' 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.142 --rc genhtml_branch_coverage=1 00:05:19.142 --rc genhtml_function_coverage=1 00:05:19.142 --rc genhtml_legend=1 00:05:19.142 --rc geninfo_all_blocks=1 00:05:19.142 --rc geninfo_unexecuted_blocks=1 00:05:19.142 00:05:19.142 ' 00:05:19.142 12:08:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:19.142 12:08:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:19.142 12:08:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:19.142 12:08:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.142 12:08:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.400 ************************************ 00:05:19.400 START TEST event_perf 00:05:19.400 ************************************ 00:05:19.400 12:08:25 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:19.400 Running I/O for 1 seconds...[2024-12-10 12:08:26.010831] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:19.400 [2024-12-10 12:08:26.010902] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452204 ] 00:05:19.401 [2024-12-10 12:08:26.121201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.659 [2024-12-10 12:08:26.226401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.659 [2024-12-10 12:08:26.226474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.659 [2024-12-10 12:08:26.226533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.659 [2024-12-10 12:08:26.226544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.032 Running I/O for 1 seconds... 00:05:21.032 lcore 0: 207037 00:05:21.032 lcore 1: 207038 00:05:21.032 lcore 2: 207037 00:05:21.032 lcore 3: 207037 00:05:21.032 done. 00:05:21.032 00:05:21.032 real 0m1.477s 00:05:21.032 user 0m4.346s 00:05:21.032 sys 0m0.126s 00:05:21.032 12:08:27 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.032 12:08:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.032 ************************************ 00:05:21.032 END TEST event_perf 00:05:21.032 ************************************ 00:05:21.032 12:08:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.032 12:08:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:21.032 12:08:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.032 12:08:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.032 ************************************ 00:05:21.032 START TEST event_reactor 00:05:21.032 ************************************ 00:05:21.032 12:08:27 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.032 [2024-12-10 12:08:27.557343] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:05:21.032 [2024-12-10 12:08:27.557417] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452465 ] 00:05:21.032 [2024-12-10 12:08:27.664633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.032 [2024-12-10 12:08:27.766599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.407 test_start 00:05:22.407 oneshot 00:05:22.407 tick 100 00:05:22.407 tick 100 00:05:22.407 tick 250 00:05:22.407 tick 100 00:05:22.407 tick 100 00:05:22.407 tick 100 00:05:22.407 tick 250 00:05:22.407 tick 500 00:05:22.407 tick 100 00:05:22.407 tick 100 00:05:22.407 tick 250 00:05:22.407 tick 100 00:05:22.407 tick 100 00:05:22.407 test_end 00:05:22.407 00:05:22.407 real 0m1.458s 00:05:22.407 user 0m1.339s 00:05:22.407 sys 0m0.113s 00:05:22.407 12:08:28 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.407 12:08:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.407 ************************************ 00:05:22.407 END TEST event_reactor 00:05:22.407 ************************************ 00:05:22.407 12:08:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.407 12:08:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.407 12:08:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.407 12:08:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.407 ************************************ 00:05:22.407 START TEST event_reactor_perf 00:05:22.407 ************************************ 00:05:22.407 12:08:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.407 [2024-12-10 12:08:29.076335] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:05:22.407 [2024-12-10 12:08:29.076421] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452706 ] 00:05:22.407 [2024-12-10 12:08:29.186153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.666 [2024-12-10 12:08:29.291244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.040 test_start 00:05:24.040 test_end 00:05:24.040 Performance: 383510 events per second 00:05:24.040 00:05:24.040 real 0m1.473s 00:05:24.040 user 0m1.349s 00:05:24.040 sys 0m0.117s 00:05:24.040 12:08:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.040 12:08:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.040 ************************************ 00:05:24.040 END TEST event_reactor_perf 00:05:24.040 ************************************ 00:05:24.040 12:08:30 event -- event/event.sh@49 -- # uname -s 00:05:24.040 12:08:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.040 12:08:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.040 12:08:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.040 12:08:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.040 12:08:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.040 ************************************ 00:05:24.040 START TEST event_scheduler 00:05:24.040 ************************************ 00:05:24.040 12:08:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.040 * Looking for test storage... 
00:05:24.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.040 12:08:30 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.040 12:08:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.040 12:08:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.040 12:08:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.041 12:08:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.041 --rc genhtml_branch_coverage=1 00:05:24.041 --rc genhtml_function_coverage=1 00:05:24.041 --rc genhtml_legend=1 00:05:24.041 --rc geninfo_all_blocks=1 00:05:24.041 --rc geninfo_unexecuted_blocks=1 00:05:24.041 00:05:24.041 ' 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.041 --rc genhtml_branch_coverage=1 00:05:24.041 --rc genhtml_function_coverage=1 00:05:24.041 --rc genhtml_legend=1 00:05:24.041 --rc geninfo_all_blocks=1 00:05:24.041 --rc geninfo_unexecuted_blocks=1 00:05:24.041 00:05:24.041 ' 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.041 --rc genhtml_branch_coverage=1 00:05:24.041 --rc genhtml_function_coverage=1 00:05:24.041 --rc genhtml_legend=1 00:05:24.041 --rc geninfo_all_blocks=1 00:05:24.041 --rc geninfo_unexecuted_blocks=1 00:05:24.041 00:05:24.041 ' 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.041 --rc genhtml_branch_coverage=1 00:05:24.041 --rc genhtml_function_coverage=1 00:05:24.041 --rc genhtml_legend=1 00:05:24.041 --rc geninfo_all_blocks=1 00:05:24.041 --rc geninfo_unexecuted_blocks=1 00:05:24.041 00:05:24.041 ' 00:05:24.041 12:08:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.041 12:08:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3452995 00:05:24.041 12:08:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.041 12:08:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.041 12:08:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3452995 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3452995 ']' 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.041 12:08:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.041 [2024-12-10 12:08:30.817724] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:24.041 [2024-12-10 12:08:30.817815] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452995 ] 00:05:24.300 [2024-12-10 12:08:30.926417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.300 [2024-12-10 12:08:31.039311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.300 [2024-12-10 12:08:31.039381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.300 [2024-12-10 12:08:31.039437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.300 [2024-12-10 12:08:31.039459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.865 12:08:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.865 12:08:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:24.865 12:08:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.865 12:08:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.865 12:08:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.865 [2024-12-10 12:08:31.633838] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:24.865 [2024-12-10 12:08:31.633863] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:24.865 [2024-12-10 12:08:31.633881] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.865 [2024-12-10 12:08:31.633890] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.865 [2024-12-10 12:08:31.633902] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.865 12:08:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.865 12:08:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.865 12:08:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.865 12:08:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.123 [2024-12-10 12:08:31.947990] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:25.123 12:08:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.382 12:08:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.382 12:08:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.382 12:08:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 ************************************ 00:05:25.382 START TEST scheduler_create_thread 00:05:25.382 ************************************ 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 2 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 3 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 4 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 5 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 6 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 7 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 8 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 9 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 10 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.382 12:08:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.758 12:08:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.758 12:08:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.758 12:08:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.758 12:08:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.758 12:08:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.130 12:08:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.130 00:05:28.130 real 0m2.626s 00:05:28.130 user 0m0.022s 00:05:28.130 sys 0m0.007s 00:05:28.130 12:08:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.130 12:08:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.130 ************************************ 00:05:28.130 END TEST scheduler_create_thread 00:05:28.130 ************************************ 00:05:28.130 12:08:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.130 12:08:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3452995 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3452995 ']' 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3452995 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3452995 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3452995' 00:05:28.130 killing process with pid 3452995 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3452995 00:05:28.130 12:08:34 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3452995 00:05:28.390 [2024-12-10 12:08:35.088664] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
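(Editorial sketch: the scheduler_create_thread subtest above drives thread creation through the test's own RPC plugin, not a core rpc.py method. The same calls can be issued manually as sketched below; the PYTHONPATH entry assumes scheduler_plugin.py lives under test/event/scheduler in the SPDK tree, and thread ids 11 and 12 are simply the ids the trace above happened to return.)

export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # prints the new thread id
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50  # make thread 11 50% active
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12         # remove thread 12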
00:05:29.765 00:05:29.765 real 0m5.639s 00:05:29.765 user 0m9.974s 00:05:29.765 sys 0m0.464s 00:05:29.765 12:08:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.765 12:08:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.765 ************************************ 00:05:29.765 END TEST event_scheduler 00:05:29.765 ************************************ 00:05:29.765 12:08:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.765 12:08:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.765 12:08:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.765 12:08:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.765 12:08:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.765 ************************************ 00:05:29.765 START TEST app_repeat 00:05:29.765 ************************************ 00:05:29.765 12:08:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3453950 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3453950' 00:05:29.765 Process app_repeat pid: 3453950 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.765 spdk_app_start Round 0 00:05:29.765 12:08:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3453950 /var/tmp/spdk-nbd.sock 00:05:29.765 12:08:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3453950 ']' 00:05:29.765 12:08:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.765 12:08:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.765 12:08:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.765 12:08:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.765 12:08:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.765 [2024-12-10 12:08:36.332827] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:05:29.765 [2024-12-10 12:08:36.332925] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453950 ] 00:05:29.765 [2024-12-10 12:08:36.443798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.765 [2024-12-10 12:08:36.555313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.765 [2024-12-10 12:08:36.555323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.700 12:08:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.700 12:08:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.700 12:08:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.700 Malloc0 00:05:30.700 12:08:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.958 Malloc1 00:05:30.958 12:08:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.958 12:08:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.216 /dev/nbd0 00:05:31.216 12:08:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.216 12:08:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.216 1+0 records in 00:05:31.216 1+0 records out 00:05:31.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387723 s, 10.6 MB/s 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:31.216 12:08:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:31.216 12:08:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.216 12:08:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.216 12:08:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.474 /dev/nbd1 00:05:31.474 12:08:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.474 12:08:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.474 1+0 records in 00:05:31.474 1+0 records out 00:05:31.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241307 s, 17.0 MB/s 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:31.474 12:08:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:31.474 12:08:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.474 12:08:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.474 
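Both nbd attach points above pass through the same waitfornbd gate before the test trusts the device: poll /proc/partitions for the device name, then read one 4 KiB block with direct I/O and insist on a non-zero size. A condensed sketch of that helper (the scratch path and the sleep between retries are assumptions; the xtrace output does not show them):

# Condensed sketch of the waitfornbd pattern from autotest_common.sh.
waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/tmp/nbdtest   # assumed scratch path; this run uses spdk/test/event/nbdtest
    # Phase 1: wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Phase 2: prove it is readable -- one direct-I/O block must come back non-empty.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
        size=$(stat -c %s $tmp)
        rm -f $tmp
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}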
12:08:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.474 12:08:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.474 12:08:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.733 { 00:05:31.733 "nbd_device": "/dev/nbd0", 00:05:31.733 "bdev_name": "Malloc0" 00:05:31.733 }, 00:05:31.733 { 00:05:31.733 "nbd_device": "/dev/nbd1", 00:05:31.733 "bdev_name": "Malloc1" 00:05:31.733 } 00:05:31.733 ]' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.733 { 00:05:31.733 "nbd_device": "/dev/nbd0", 00:05:31.733 "bdev_name": "Malloc0" 00:05:31.733 }, 00:05:31.733 { 00:05:31.733 "nbd_device": "/dev/nbd1", 00:05:31.733 "bdev_name": "Malloc1" 00:05:31.733 } 00:05:31.733 ]' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.733 /dev/nbd1' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.733 /dev/nbd1' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.733 256+0 records in 00:05:31.733 256+0 records out 00:05:31.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994032 s, 105 MB/s 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.733 256+0 records in 00:05:31.733 256+0 records out 00:05:31.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155016 s, 67.6 MB/s 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.733 256+0 records in 00:05:31.733 256+0 records out 00:05:31.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193028 s, 54.3 MB/s 00:05:31.733 12:08:38 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.733 12:08:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.991 12:08:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.249 12:08:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.507 12:08:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.507 12:08:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.508 12:08:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.508 12:08:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.765 12:08:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.140 [2024-12-10 12:08:40.741275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.140 [2024-12-10 12:08:40.842057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.140 [2024-12-10 12:08:40.842058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.398 [2024-12-10 12:08:41.034073] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.398 [2024-12-10 12:08:41.034123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.847 12:08:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.847 12:08:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.847 spdk_app_start Round 1 00:05:35.847 12:08:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3453950 /var/tmp/spdk-nbd.sock 00:05:35.847 12:08:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3453950 ']' 00:05:35.847 12:08:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.847 12:08:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.847 12:08:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
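Round 0 closes above with the core data-integrity check: nbd_dd_data_verify writes 256 blocks of /dev/urandom through each nbd device with oflag=direct, then compares the first 1 MiB of each device back against the source file with cmp. The same round trip, condensed (paths and dd/cmp arguments as in the trace):

# Sketch of the nbd_dd_data_verify write/verify round trip.
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# Write phase: 1 MiB of random payload, pushed through each device with direct I/O.
dd if=/dev/urandom of=$tmp_file bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct
done

# Verify phase: the first 1 MiB read back from each device must match byte for byte.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M $tmp_file $nbd
done
rm $tmp_file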
00:05:35.847 12:08:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.847 12:08:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.105 12:08:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.105 12:08:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:36.105 12:08:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.363 Malloc0 00:05:36.363 12:08:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.363 Malloc1 00:05:36.621 12:08:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.621 /dev/nbd0 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.621 12:08:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.621 12:08:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:36.621 1+0 records in 00:05:36.621 1+0 records out 00:05:36.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211755 s, 19.3 MB/s 00:05:36.878 12:08:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.878 12:08:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.878 12:08:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.878 12:08:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.878 12:08:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.878 12:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.878 12:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.879 /dev/nbd1 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.879 1+0 records in 00:05:36.879 1+0 records out 00:05:36.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197841 s, 20.7 MB/s 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.879 12:08:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.879 12:08:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:37.136 { 00:05:37.136 "nbd_device": "/dev/nbd0", 00:05:37.136 "bdev_name": "Malloc0" 00:05:37.136 }, 00:05:37.136 { 00:05:37.136 "nbd_device": "/dev/nbd1", 00:05:37.136 "bdev_name": "Malloc1" 00:05:37.136 } 00:05:37.136 ]' 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.136 { 00:05:37.136 "nbd_device": "/dev/nbd0", 00:05:37.136 "bdev_name": "Malloc0" 00:05:37.136 }, 00:05:37.136 { 00:05:37.136 "nbd_device": "/dev/nbd1", 00:05:37.136 "bdev_name": "Malloc1" 00:05:37.136 } 00:05:37.136 ]' 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.136 /dev/nbd1' 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.136 /dev/nbd1' 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.136 256+0 records in 00:05:37.136 256+0 records out 00:05:37.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00971964 s, 108 MB/s 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.136 256+0 records in 00:05:37.136 256+0 records out 00:05:37.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166626 s, 62.9 MB/s 00:05:37.136 12:08:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.137 12:08:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.394 256+0 records in 00:05:37.394 256+0 records out 00:05:37.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188536 s, 55.6 MB/s 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.394 12:08:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.394 12:08:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.651 12:08:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.909 12:08:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.909 12:08:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.478 12:08:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.857 [2024-12-10 12:08:46.249549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.857 [2024-12-10 12:08:46.351399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.857 [2024-12-10 12:08:46.351406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.857 [2024-12-10 12:08:46.540696] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.857 [2024-12-10 12:08:46.540744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.229 12:08:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.229 12:08:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:41.229 spdk_app_start Round 2 00:05:41.229 12:08:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3453950 /var/tmp/spdk-nbd.sock 00:05:41.229 12:08:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3453950 ']' 00:05:41.229 12:08:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.229 12:08:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.229 12:08:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
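The count checks bracketing each round (count=2 right after attach, count=0 once the disks are stopped, as in the empty '[]' reply above) come from nbd_get_count, which parses the nbd_get_disks JSON with jq. A sketch of that query; the || true fallback mirrors the bare `true` in the trace, since grep -c exits non-zero when nothing matches:

# Sketch of the nbd_get_count query used throughout this run.
RPC_SOCK=/var/tmp/spdk-nbd.sock
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# nbd_get_disks returns a JSON array of {nbd_device, bdev_name} objects,
# or '[]' after nbd_stop_disk has detached everything.
disks_json=$($rpc -s $RPC_SOCK nbd_get_disks)
disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$disks_name" | grep -c /dev/nbd || true)
echo "attached nbd devices: $count"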
00:05:41.229 12:08:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.229 12:08:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.486 12:08:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.486 12:08:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:41.487 12:08:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.744 Malloc0 00:05:41.744 12:08:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.002 Malloc1 00:05:42.002 12:08:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.002 12:08:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.260 /dev/nbd0 00:05:42.260 12:08:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.260 12:08:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:42.260 1+0 records in 00:05:42.260 1+0 records out 00:05:42.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018262 s, 22.4 MB/s 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.260 12:08:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.260 12:08:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.260 12:08:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.260 12:08:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.518 /dev/nbd1 00:05:42.518 12:08:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.518 12:08:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.518 1+0 records in 00:05:42.518 1+0 records out 00:05:42.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156634 s, 26.2 MB/s 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.518 12:08:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.518 12:08:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.518 12:08:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.518 12:08:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.518 12:08:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.518 12:08:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:42.776 { 00:05:42.776 "nbd_device": "/dev/nbd0", 00:05:42.776 "bdev_name": "Malloc0" 00:05:42.776 }, 00:05:42.776 { 00:05:42.776 "nbd_device": "/dev/nbd1", 00:05:42.776 "bdev_name": "Malloc1" 00:05:42.776 } 00:05:42.776 ]' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.776 { 00:05:42.776 "nbd_device": "/dev/nbd0", 00:05:42.776 "bdev_name": "Malloc0" 00:05:42.776 }, 00:05:42.776 { 00:05:42.776 "nbd_device": "/dev/nbd1", 00:05:42.776 "bdev_name": "Malloc1" 00:05:42.776 } 00:05:42.776 ]' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.776 /dev/nbd1' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.776 /dev/nbd1' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.776 256+0 records in 00:05:42.776 256+0 records out 00:05:42.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100453 s, 104 MB/s 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.776 256+0 records in 00:05:42.776 256+0 records out 00:05:42.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157229 s, 66.7 MB/s 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.776 256+0 records in 00:05:42.776 256+0 records out 00:05:42.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185627 s, 56.5 MB/s 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.776 12:08:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.034 12:08:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.292 12:08:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.559 12:08:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.559 12:08:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.875 12:08:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.260 [2024-12-10 12:08:51.738993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.260 [2024-12-10 12:08:51.840397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.260 [2024-12-10 12:08:51.840398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.260 [2024-12-10 12:08:52.034701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.260 [2024-12-10 12:08:52.034744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.159 12:08:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3453950 /var/tmp/spdk-nbd.sock 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3453950 ']' 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
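Each of the three loop rounds above ends identically: event.sh asks the running app to shut down over RPC with spdk_kill_instance SIGTERM, then sleeps three seconds while app_repeat (launched with -t 4) brings the framework back up for the next round; the final round is reaped with killprocess instead, as the trace below shows. A sketch of that driver loop (that app_repeat restarts the framework itself is an inference from the repeated "Starting SPDK ... reinitialization" notices):

# Sketch of the round driver in event.sh, as traced above.
RPC_SOCK=/var/tmp/spdk-nbd.sock
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # ... bdev_malloc_create, nbd_start_disk, write/verify, nbd_stop_disk (see above) ...
    $rpc -s $RPC_SOCK spdk_kill_instance SIGTERM   # end this round's app instance
    sleep 3                                        # let app_repeat start the next round
done
# Round 3, the last of the -t 4 repeats, is ended by killing the pid directly:
# killprocess $repeat_pid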
00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.159 12:08:53 event.app_repeat -- event/event.sh@39 -- # killprocess 3453950 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3453950 ']' 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3453950 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3453950 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3453950' 00:05:47.159 killing process with pid 3453950 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3453950 00:05:47.159 12:08:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3453950 00:05:48.093 spdk_app_start is called in Round 0. 00:05:48.093 Shutdown signal received, stop current app iteration 00:05:48.093 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:05:48.093 spdk_app_start is called in Round 1. 00:05:48.093 Shutdown signal received, stop current app iteration 00:05:48.093 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:05:48.093 spdk_app_start is called in Round 2. 00:05:48.093 Shutdown signal received, stop current app iteration 00:05:48.093 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:05:48.093 spdk_app_start is called in Round 3. 
00:05:48.093 Shutdown signal received, stop current app iteration 00:05:48.093 12:08:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.093 12:08:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:48.093 00:05:48.093 real 0m18.501s 00:05:48.093 user 0m39.101s 00:05:48.093 sys 0m2.602s 00:05:48.093 12:08:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.093 12:08:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.093 ************************************ 00:05:48.093 END TEST app_repeat 00:05:48.093 ************************************ 00:05:48.093 12:08:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.093 12:08:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.093 12:08:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.093 12:08:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.093 12:08:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.093 ************************************ 00:05:48.093 START TEST cpu_locks 00:05:48.093 ************************************ 00:05:48.093 12:08:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.351 * Looking for test storage... 00:05:48.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:48.351 12:08:54 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.351 12:08:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.351 12:08:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.351 12:08:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.351 12:08:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:48.352 12:08:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.352 12:08:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.352 12:08:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.352 12:08:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.352 --rc genhtml_branch_coverage=1 00:05:48.352 --rc genhtml_function_coverage=1 00:05:48.352 --rc genhtml_legend=1 00:05:48.352 --rc geninfo_all_blocks=1 00:05:48.352 --rc geninfo_unexecuted_blocks=1 00:05:48.352 00:05:48.352 ' 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.352 --rc genhtml_branch_coverage=1 00:05:48.352 --rc genhtml_function_coverage=1 00:05:48.352 --rc genhtml_legend=1 00:05:48.352 --rc geninfo_all_blocks=1 00:05:48.352 --rc geninfo_unexecuted_blocks=1 00:05:48.352 00:05:48.352 ' 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.352 --rc genhtml_branch_coverage=1 00:05:48.352 --rc genhtml_function_coverage=1 00:05:48.352 --rc genhtml_legend=1 00:05:48.352 --rc geninfo_all_blocks=1 00:05:48.352 --rc geninfo_unexecuted_blocks=1 00:05:48.352 00:05:48.352 ' 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.352 --rc genhtml_branch_coverage=1 00:05:48.352 --rc genhtml_function_coverage=1 00:05:48.352 --rc genhtml_legend=1 00:05:48.352 --rc geninfo_all_blocks=1 00:05:48.352 --rc geninfo_unexecuted_blocks=1 00:05:48.352 00:05:48.352 ' 00:05:48.352 12:08:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.352 12:08:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.352 12:08:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.352 12:08:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.352 12:08:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.352 ************************************ 
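[Annotation] The cmp_versions/lt trace above is the suite probing the installed lcov version (scripts/common.sh@333-368) to pick coverage flags: the versions are split on ".-:" into arrays and compared field by field. A minimal sketch of that idiom — names taken from the trace, body reconstructed and simplified (the real helper also validates each field through decimal):

    lt() { cmp_versions "$1" '<' "$2"; }    # invoked above as: lt 1.15 2
    cmp_versions() {
        local IFS=.-: op=$2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == '>' ]]; return; }
            ((d1 < d2)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }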
00:05:48.352 START TEST default_locks 00:05:48.352 ************************************ 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3457315 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3457315 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3457315 ']' 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.352 12:08:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.352 [2024-12-10 12:08:55.143347] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:48.352 [2024-12-10 12:08:55.143435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457315 ] 00:05:48.609 [2024-12-10 12:08:55.257721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.609 [2024-12-10 12:08:55.369247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.542 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.542 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:49.542 12:08:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3457315 00:05:49.542 12:08:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3457315 00:05:49.542 12:08:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.800 lslocks: write error 00:05:49.800 12:08:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3457315 00:05:49.800 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3457315 ']' 00:05:49.800 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3457315 00:05:49.800 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.800 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.800 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3457315 00:05:50.058 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.058 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.058 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3457315' 00:05:50.058 killing process with pid 3457315 00:05:50.058 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3457315 00:05:50.058 12:08:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3457315 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3457315 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3457315 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3457315 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3457315 ']' 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
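[Annotation] The valid_exec_arg/es machinery traced above is the suite's NOT wrapper: after killprocess, waitforlisten 3457315 is expected to fail, and NOT inverts that failure into a test pass. The core idiom, stripped of the argument validation and signal-exit normalization the full autotest_common.sh helper performs:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # succeed only if the wrapped command failed
    }
    # as used above: NOT waitforlisten 3457315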
00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3457315) - No such process 00:05:52.587 ERROR: process (pid: 3457315) is no longer running 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.587 00:05:52.587 real 0m3.905s 00:05:52.587 user 0m3.896s 00:05:52.587 sys 0m0.659s 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.587 12:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.587 ************************************ 00:05:52.587 END TEST default_locks 00:05:52.587 ************************************ 00:05:52.587 12:08:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.587 12:08:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.587 12:08:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.587 12:08:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.587 ************************************ 00:05:52.587 START TEST default_locks_via_rpc 00:05:52.587 ************************************ 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3458021 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3458021 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3458021 ']' 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
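[Annotation] default_locks (ended above) rests on two helpers that recur through the rest of this suite. locks_exist appears verbatim at cpu_locks.sh@22; killprocess is reconstructed here from the autotest_common.sh@954-978 traces, simplified:

    locks_exist() {                                  # cpu_locks.sh@22
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # liveness probe (@958)
        local name
        name=$(ps --no-headers -o comm= "$pid")      # comm lookup, Linux branch (@960)
        [[ $name != sudo ]] || return 1              # the real helper handles sudo-wrapped targets differently (@964)
        echo "killing process with pid $pid"         # (@972)
        kill "$pid" && wait "$pid"                   # reap the child (@973, @978)
    }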
00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.587 12:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.587 [2024-12-10 12:08:59.111838] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:52.587 [2024-12-10 12:08:59.111944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458021 ] 00:05:52.587 [2024-12-10 12:08:59.224768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.587 [2024-12-10 12:08:59.329295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3458021 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3458021 00:05:53.522 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.781 12:09:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3458021 00:05:53.781 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3458021 ']' 00:05:53.781 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3458021 00:05:53.781 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.781 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.781 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3458021 00:05:54.039 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.039 
12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.039 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3458021' 00:05:54.039 killing process with pid 3458021 00:05:54.039 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3458021 00:05:54.039 12:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3458021 00:05:56.568 00:05:56.568 real 0m3.978s 00:05:56.568 user 0m3.969s 00:05:56.568 sys 0m0.650s 00:05:56.568 12:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.568 12:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.568 ************************************ 00:05:56.568 END TEST default_locks_via_rpc 00:05:56.568 ************************************ 00:05:56.568 12:09:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.568 12:09:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.568 12:09:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.568 12:09:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.568 ************************************ 00:05:56.568 START TEST non_locking_app_on_locked_coremask 00:05:56.568 ************************************ 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3458849 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3458849 /var/tmp/spdk.sock 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3458849 ']' 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.568 12:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.568 [2024-12-10 12:09:03.145871] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
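[Annotation] default_locks_via_rpc (ended above) exercises the same core-0 lock but toggles it at runtime: rpc_cmd framework_disable_cpumask_locks releases the claim (no_locks then sees an empty /var/tmp/spdk_cpu_lock_* glob), and framework_enable_cpumask_locks re-acquires it (locks_exist passes again). An equivalent direct sequence, assuming SPDK's stock scripts/rpc.py client in place of the suite's rpc_cmd wrapper:

    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # lock-file claim dropped
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # core 0 claimed again
    lslocks -p "$pid" | grep -q spdk_cpu_lock                                # and visible to lslocks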
00:05:56.568 [2024-12-10 12:09:03.145962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458849 ] 00:05:56.568 [2024-12-10 12:09:03.259987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.568 [2024-12-10 12:09:03.369439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3459073 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3459073 /var/tmp/spdk2.sock 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3459073 ']' 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.501 12:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.501 [2024-12-10 12:09:04.288114] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:57.501 [2024-12-10 12:09:04.288218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459073 ] 00:05:57.759 [2024-12-10 12:09:04.445647] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.759 [2024-12-10 12:09:04.445693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.017 [2024-12-10 12:09:04.662561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3458849 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3458849 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.548 lslocks: write error 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3458849 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3458849 ']' 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3458849 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.548 12:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3458849 00:06:00.548 12:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.548 12:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.548 12:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3458849' 00:06:00.548 killing process with pid 3458849 00:06:00.548 12:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3458849 00:06:00.548 12:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3458849 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3459073 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3459073 ']' 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3459073 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459073 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459073' 00:06:05.803 
killing process with pid 3459073 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3459073 00:06:05.803 12:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3459073 00:06:07.699 00:06:07.699 real 0m10.961s 00:06:07.699 user 0m11.197s 00:06:07.700 sys 0m1.075s 00:06:07.700 12:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.700 12:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.700 ************************************ 00:06:07.700 END TEST non_locking_app_on_locked_coremask 00:06:07.700 ************************************ 00:06:07.700 12:09:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:07.700 12:09:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.700 12:09:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.700 12:09:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.700 ************************************ 00:06:07.700 START TEST locking_app_on_unlocked_coremask 00:06:07.700 ************************************ 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3461126 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3461126 /var/tmp/spdk.sock 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3461126 ']' 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.700 12:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.700 [2024-12-10 12:09:14.170723] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:07.700 [2024-12-10 12:09:14.170815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461126 ] 00:06:07.700 [2024-12-10 12:09:14.283451] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
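[Annotation] non_locking_app_on_locked_coremask (ended above, pids 3458849/3459073) demonstrates that --disable-cpumask-locks lets a second target share an already-claimed core. The shape of the scenario, with spdk_tgt abbreviating the full build/bin path used in the log:

    spdk_tgt -m 0x1 &                                                   # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same core, no claim attempted
    pid2=$!                                                             # both come up; only pid1 holds the lock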
00:06:07.700 [2024-12-10 12:09:14.283492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.700 [2024-12-10 12:09:14.388757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3461292 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3461292 /var/tmp/spdk2.sock 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3461292 ']' 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.631 12:09:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.631 [2024-12-10 12:09:15.283966] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:08.631 [2024-12-10 12:09:15.284072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461292 ] 00:06:08.631 [2024-12-10 12:09:15.441707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.888 [2024-12-10 12:09:15.650411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.412 12:09:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.412 12:09:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.412 12:09:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3461292 00:06:11.412 12:09:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3461292 00:06:11.412 12:09:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.669 lslocks: write error 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3461126 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3461126 ']' 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3461126 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461126 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461126' 00:06:11.669 killing process with pid 3461126 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3461126 00:06:11.669 12:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3461126 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3461292 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3461292 ']' 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3461292 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461292 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.925 12:09:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461292' 00:06:16.925 killing process with pid 3461292 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3461292 00:06:16.925 12:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3461292 00:06:18.821 00:06:18.821 real 0m11.299s 00:06:18.821 user 0m11.561s 00:06:18.821 sys 0m1.224s 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.821 ************************************ 00:06:18.821 END TEST locking_app_on_unlocked_coremask 00:06:18.821 ************************************ 00:06:18.821 12:09:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.821 12:09:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.821 12:09:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.821 12:09:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.821 ************************************ 00:06:18.821 START TEST locking_app_on_locked_coremask 00:06:18.821 ************************************ 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3463111 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3463111 /var/tmp/spdk.sock 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3463111 ']' 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.821 12:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.821 [2024-12-10 12:09:25.542274] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
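[Annotation] locking_app_on_unlocked_coremask (ended above, pids 3461126/3461292) is the mirror case: the first target runs unlocked, so the second, locking target acquires core 0 unopposed, and the lslocks check is aimed at the second pid — the one actually holding the lock:

    spdk_tgt -m 0x1 --disable-cpumask-locks &     # no claim on core 0
    pid1=$!
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # takes /var/tmp/spdk_cpu_lock_000 despite the overlap
    pid2=$!
    lslocks -p "$pid2" | grep spdk_cpu_lock       # lock belongs to the second target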
00:06:18.821 [2024-12-10 12:09:25.542362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463111 ] 00:06:19.079 [2024-12-10 12:09:25.655403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.079 [2024-12-10 12:09:25.752756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3463335 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3463335 /var/tmp/spdk2.sock 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3463335 /var/tmp/spdk2.sock 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3463335 /var/tmp/spdk2.sock 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3463335 ']' 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.010 12:09:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.010 [2024-12-10 12:09:26.627069] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:20.010 [2024-12-10 12:09:26.627154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463335 ] 00:06:20.010 [2024-12-10 12:09:26.784943] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3463111 has claimed it. 00:06:20.010 [2024-12-10 12:09:26.784995] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3463335) - No such process 00:06:20.575 ERROR: process (pid: 3463335) is no longer running 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3463111 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3463111 00:06:20.575 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.138 lslocks: write error 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3463111 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3463111 ']' 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3463111 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3463111 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.138 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3463111' 00:06:21.138 killing process with pid 3463111 00:06:21.139 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3463111 00:06:21.139 12:09:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3463111 00:06:23.662 00:06:23.662 real 0m4.641s 00:06:23.662 user 0m4.763s 00:06:23.662 sys 0m0.852s 00:06:23.662 12:09:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
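[Annotation] locking_app_on_locked_coremask (ended above) is the enforcement case: with pid 3463111 holding core 0, the second locking target aborts in claim_cpu_cores (app.c:781) and NOT waitforlisten confirms it never came up. What lslocks has been matching all along is a file lock held on one per-core path; a quick manual inspection, using the naming that check_remaining_locks spells out later in this log:

    core=0
    lock=/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core")   # -> /var/tmp/spdk_cpu_lock_000
    ls -l "$lock"                                          # path exists while the core is claimed
    lslocks | grep spdk_cpu_lock                           # shows the holding pid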
00:06:23.662 12:09:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.662 ************************************ 00:06:23.662 END TEST locking_app_on_locked_coremask 00:06:23.662 ************************************ 00:06:23.662 12:09:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.662 12:09:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.662 12:09:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.662 12:09:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.662 ************************************ 00:06:23.662 START TEST locking_overlapped_coremask 00:06:23.662 ************************************ 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3463832 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3463832 /var/tmp/spdk.sock 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3463832 ']' 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.662 12:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.662 [2024-12-10 12:09:30.253905] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
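[Annotation] locking_overlapped_coremask moves from single-core to partially overlapping masks. Worked out from the -m flags in the trace:

    -m 0x7  = 0b00111  -> cores 0,1,2   (first target, locks enabled)
    -m 0x1c = 0b11100  -> cores 2,3,4   (second target)
    0x7 & 0x1c = 0x4   -> core 2 is shared, so the second claim of
                          /var/tmp/spdk_cpu_lock_002 must fail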
00:06:23.662 [2024-12-10 12:09:30.253998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463832 ] 00:06:23.662 [2024-12-10 12:09:30.366509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.662 [2024-12-10 12:09:30.478099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.662 [2024-12-10 12:09:30.478177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.662 [2024-12-10 12:09:30.478180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3464058 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3464058 /var/tmp/spdk2.sock 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3464058 /var/tmp/spdk2.sock 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3464058 /var/tmp/spdk2.sock 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3464058 ']' 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.596 12:09:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 [2024-12-10 12:09:31.405290] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:24.596 [2024-12-10 12:09:31.405376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464058 ] 00:06:24.854 [2024-12-10 12:09:31.561103] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3463832 has claimed it. 00:06:24.854 [2024-12-10 12:09:31.561157] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3464058) - No such process 00:06:25.418 ERROR: process (pid: 3464058) is no longer running 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.418 12:09:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3463832 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3463832 ']' 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3463832 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3463832 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3463832' 00:06:25.419 killing process with pid 3463832 00:06:25.419 12:09:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3463832 00:06:25.419 12:09:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3463832 00:06:27.948 00:06:27.948 real 0m4.326s 00:06:27.948 user 0m11.951s 00:06:27.948 sys 0m0.609s 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.948 ************************************ 00:06:27.948 END TEST locking_overlapped_coremask 00:06:27.948 ************************************ 00:06:27.948 12:09:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.948 12:09:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.948 12:09:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.948 12:09:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.948 ************************************ 00:06:27.948 START TEST locking_overlapped_coremask_via_rpc 00:06:27.948 ************************************ 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3464720 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3464720 /var/tmp/spdk.sock 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3464720 ']' 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.948 12:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.948 [2024-12-10 12:09:34.644603] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:27.948 [2024-12-10 12:09:34.644692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464720 ] 00:06:27.948 [2024-12-10 12:09:34.754521] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
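[Annotation] After the overlap failure, check_remaining_locks verifies that exactly the three lock files of the surviving 0x7 target are left behind; its body is visible in the cpu_locks.sh@36-38 trace above and amounts to:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                      # what actually exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # what core mask 0x7 should leave
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }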
00:06:27.948 [2024-12-10 12:09:34.754556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.206 [2024-12-10 12:09:34.864807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.206 [2024-12-10 12:09:34.864875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.206 [2024-12-10 12:09:34.864882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3464816 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3464816 /var/tmp/spdk2.sock 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3464816 ']' 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.144 12:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.144 [2024-12-10 12:09:35.789560] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:29.144 [2024-12-10 12:09:35.789657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464816 ] 00:06:29.144 [2024-12-10 12:09:35.951807] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
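The two targets above are launched with coremasks 0x7 and 0x1c. Reading bit N of the mask as core N (consistent with the reactor lines in this log: 0x7 brings up cores 0-2, 0x1c brings up cores 2-4), the masks can be intersected directly to find the contested core — a minimal bash check:

    # 0x7  = 0b00111 -> cores 0,1,2  (first spdk_tgt)
    # 0x1c = 0b11100 -> cores 2,3,4  (second spdk_tgt)
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4: only bit 2 is set, so both masks claim core 2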
00:06:29.144 [2024-12-10 12:09:35.951854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.402 [2024-12-10 12:09:36.177655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.402 [2024-12-10 12:09:36.181219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.402 [2024-12-10 12:09:36.181245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 [2024-12-10 12:09:38.299286] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3464720 has claimed it. 
00:06:31.929 request: 00:06:31.929 { 00:06:31.929 "method": "framework_enable_cpumask_locks", 00:06:31.929 "req_id": 1 00:06:31.929 } 00:06:31.929 Got JSON-RPC error response 00:06:31.929 response: 00:06:31.929 { 00:06:31.929 "code": -32603, 00:06:31.929 "message": "Failed to claim CPU core: 2" 00:06:31.929 } 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3464720 /var/tmp/spdk.sock 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3464720 ']' 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3464816 /var/tmp/spdk2.sock 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3464816 ']' 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
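The -32603 response above ("Failed to claim CPU core: 2") is the point of this test step: both targets start with --disable-cpumask-locks, the first then claims its cores via RPC, and the second fails when framework_enable_cpumask_locks tries to take core 2, which the first target already holds as /var/tmp/spdk_cpu_lock_002. A minimal sketch of the same sequence (startup waits omitted; paths abbreviated relative to the spdk tree used in this run):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2, no locks held yet
    scripts/rpc.py framework_enable_cpumask_locks                                 # creates /var/tmp/spdk_cpu_lock_000..002
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4, second RPC socket
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # -32603: core 2 already locked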
00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.929 00:06:31.929 real 0m4.187s 00:06:31.929 user 0m1.159s 00:06:31.929 sys 0m0.198s 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.929 12:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 ************************************ 00:06:31.929 END TEST locking_overlapped_coremask_via_rpc 00:06:31.929 ************************************ 00:06:32.187 12:09:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.187 12:09:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3464720 ]] 00:06:32.187 12:09:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3464720 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3464720 ']' 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3464720 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464720 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464720' 00:06:32.187 killing process with pid 3464720 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3464720 00:06:32.187 12:09:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3464720 00:06:34.715 12:09:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3464816 ]] 00:06:34.715 12:09:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3464816 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3464816 ']' 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3464816 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464816 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464816' 00:06:34.715 killing process with pid 3464816 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3464816 00:06:34.715 12:09:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3464816 00:06:37.262 12:09:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.262 12:09:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:37.262 12:09:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3464720 ]] 00:06:37.262 12:09:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3464720 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3464720 ']' 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3464720 00:06:37.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3464720) - No such process 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3464720 is not found' 00:06:37.262 Process with pid 3464720 is not found 00:06:37.262 12:09:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3464816 ]] 00:06:37.262 12:09:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3464816 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3464816 ']' 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3464816 00:06:37.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3464816) - No such process 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3464816 is not found' 00:06:37.262 Process with pid 3464816 is not found 00:06:37.262 12:09:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.262 00:06:37.262 real 0m48.914s 00:06:37.262 user 1m24.581s 00:06:37.262 sys 0m6.440s 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.262 12:09:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.262 ************************************ 00:06:37.262 END TEST cpu_locks 00:06:37.262 ************************************ 00:06:37.262 00:06:37.262 real 1m18.024s 00:06:37.262 user 2m20.948s 00:06:37.262 sys 0m10.204s 00:06:37.262 12:09:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.262 12:09:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.262 ************************************ 00:06:37.262 END TEST event 00:06:37.262 ************************************ 00:06:37.262 12:09:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:37.262 12:09:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.262 12:09:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.262 12:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:37.262 ************************************ 00:06:37.262 START TEST thread 00:06:37.262 ************************************ 00:06:37.262 12:09:43 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:37.262 * Looking for test storage... 00:06:37.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:37.262 12:09:43 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.262 12:09:43 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.262 12:09:43 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.262 12:09:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.262 12:09:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.262 12:09:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.262 12:09:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.262 12:09:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.262 12:09:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.262 12:09:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.262 12:09:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.262 12:09:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.262 12:09:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.262 12:09:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.262 12:09:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:37.262 12:09:44 thread -- scripts/common.sh@345 -- # : 1 00:06:37.262 12:09:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.262 12:09:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.262 12:09:44 thread -- scripts/common.sh@365 -- # decimal 1 00:06:37.262 12:09:44 thread -- scripts/common.sh@353 -- # local d=1 00:06:37.262 12:09:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.262 12:09:44 thread -- scripts/common.sh@355 -- # echo 1 00:06:37.262 12:09:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.262 12:09:44 thread -- scripts/common.sh@366 -- # decimal 2 00:06:37.262 12:09:44 thread -- scripts/common.sh@353 -- # local d=2 00:06:37.262 12:09:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.262 12:09:44 thread -- scripts/common.sh@355 -- # echo 2 00:06:37.262 12:09:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.262 12:09:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.262 12:09:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.262 12:09:44 thread -- scripts/common.sh@368 -- # return 0 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.262 --rc genhtml_branch_coverage=1 00:06:37.262 --rc genhtml_function_coverage=1 00:06:37.262 --rc genhtml_legend=1 00:06:37.262 --rc geninfo_all_blocks=1 00:06:37.262 --rc geninfo_unexecuted_blocks=1 00:06:37.262 00:06:37.262 ' 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.262 --rc genhtml_branch_coverage=1 00:06:37.262 --rc genhtml_function_coverage=1 00:06:37.262 --rc genhtml_legend=1 00:06:37.262 --rc geninfo_all_blocks=1 00:06:37.262 --rc geninfo_unexecuted_blocks=1 00:06:37.262 
00:06:37.262 ' 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.262 --rc genhtml_branch_coverage=1 00:06:37.262 --rc genhtml_function_coverage=1 00:06:37.262 --rc genhtml_legend=1 00:06:37.262 --rc geninfo_all_blocks=1 00:06:37.262 --rc geninfo_unexecuted_blocks=1 00:06:37.262 00:06:37.262 ' 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.262 --rc genhtml_branch_coverage=1 00:06:37.262 --rc genhtml_function_coverage=1 00:06:37.262 --rc genhtml_legend=1 00:06:37.262 --rc geninfo_all_blocks=1 00:06:37.262 --rc geninfo_unexecuted_blocks=1 00:06:37.262 00:06:37.262 ' 00:06:37.262 12:09:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.262 12:09:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.262 ************************************ 00:06:37.262 START TEST thread_poller_perf 00:06:37.262 ************************************ 00:06:37.262 12:09:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.520 [2024-12-10 12:09:44.093611] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:37.520 [2024-12-10 12:09:44.093690] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466429 ] 00:06:37.520 [2024-12-10 12:09:44.202599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.520 [2024-12-10 12:09:44.311140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.520 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:38.894 [2024-12-10T11:09:45.720Z] ====================================== 00:06:38.894 [2024-12-10T11:09:45.720Z] busy:2109459930 (cyc) 00:06:38.894 [2024-12-10T11:09:45.720Z] total_run_count: 412000 00:06:38.894 [2024-12-10T11:09:45.720Z] tsc_hz: 2100000000 (cyc) 00:06:38.894 [2024-12-10T11:09:45.720Z] ====================================== 00:06:38.894 [2024-12-10T11:09:45.720Z] poller_cost: 5120 (cyc), 2438 (nsec) 00:06:38.894 00:06:38.894 real 0m1.473s 00:06:38.894 user 0m1.353s 00:06:38.894 sys 0m0.114s 00:06:38.894 12:09:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.894 12:09:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.894 ************************************ 00:06:38.894 END TEST thread_poller_perf 00:06:38.894 ************************************ 00:06:38.894 12:09:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.894 12:09:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:38.894 12:09:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.894 12:09:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.894 ************************************ 00:06:38.894 START TEST thread_poller_perf 00:06:38.894 ************************************ 00:06:38.894 12:09:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.894 [2024-12-10 12:09:45.629179] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:38.894 [2024-12-10 12:09:45.629271] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466676 ] 00:06:39.155 [2024-12-10 12:09:45.737021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.155 [2024-12-10 12:09:45.842831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.155 Running 1000 pollers for 1 seconds with 0 microseconds period. 
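In the summary blocks printed by poller_perf, the reported poller_cost is consistent with busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Re-deriving the 1-microsecond run's figures above (integer-truncated, as reported):

    echo $(( 2109459930 / 412000 ))             # 5120 cycles per poller iteration
    awk 'BEGIN { printf "%d\n", 5120 / 2.1 }'   # 2438 nsec at tsc_hz 2100000000 (2.1 GHz)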
00:06:40.260 [2024-12-10T11:09:47.086Z] ====================================== 00:06:40.260 [2024-12-10T11:09:47.086Z] busy:2102295592 (cyc) 00:06:40.260 [2024-12-10T11:09:47.086Z] total_run_count: 4903000 00:06:40.260 [2024-12-10T11:09:47.086Z] tsc_hz: 2100000000 (cyc) 00:06:40.260 [2024-12-10T11:09:47.086Z] ====================================== 00:06:40.260 [2024-12-10T11:09:47.086Z] poller_cost: 428 (cyc), 203 (nsec) 00:06:40.260 00:06:40.260 real 0m1.461s 00:06:40.260 user 0m1.336s 00:06:40.260 sys 0m0.119s 00:06:40.260 12:09:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.261 12:09:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.261 ************************************ 00:06:40.261 END TEST thread_poller_perf 00:06:40.261 ************************************ 00:06:40.534 12:09:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:40.534 00:06:40.534 real 0m3.224s 00:06:40.534 user 0m2.835s 00:06:40.534 sys 0m0.397s 00:06:40.534 12:09:47 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.534 12:09:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.534 ************************************ 00:06:40.534 END TEST thread 00:06:40.534 ************************************ 00:06:40.534 12:09:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:40.534 12:09:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.534 12:09:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.534 12:09:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.534 12:09:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.534 ************************************ 00:06:40.534 START TEST app_cmdline 00:06:40.534 ************************************ 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.534 * Looking for test storage... 
00:06:40.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.534 12:09:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.534 --rc genhtml_branch_coverage=1 00:06:40.534 --rc genhtml_function_coverage=1 00:06:40.534 --rc genhtml_legend=1 00:06:40.534 --rc geninfo_all_blocks=1 00:06:40.534 --rc geninfo_unexecuted_blocks=1 00:06:40.534 00:06:40.534 ' 00:06:40.534 12:09:47 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.535 --rc genhtml_branch_coverage=1 00:06:40.535 --rc genhtml_function_coverage=1 00:06:40.535 --rc genhtml_legend=1 00:06:40.535 --rc geninfo_all_blocks=1 00:06:40.535 --rc geninfo_unexecuted_blocks=1 
00:06:40.535 00:06:40.535 ' 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.535 --rc genhtml_branch_coverage=1 00:06:40.535 --rc genhtml_function_coverage=1 00:06:40.535 --rc genhtml_legend=1 00:06:40.535 --rc geninfo_all_blocks=1 00:06:40.535 --rc geninfo_unexecuted_blocks=1 00:06:40.535 00:06:40.535 ' 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.535 --rc genhtml_branch_coverage=1 00:06:40.535 --rc genhtml_function_coverage=1 00:06:40.535 --rc genhtml_legend=1 00:06:40.535 --rc geninfo_all_blocks=1 00:06:40.535 --rc geninfo_unexecuted_blocks=1 00:06:40.535 00:06:40.535 ' 00:06:40.535 12:09:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.535 12:09:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3466984 00:06:40.535 12:09:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.535 12:09:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3466984 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3466984 ']' 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.535 12:09:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.793 [2024-12-10 12:09:47.406673] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
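This spdk_tgt instance is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC layer accepts only those two methods; the env_dpdk_get_mem_stats call later in this test is answered with JSON-RPC -32601 "Method not found" for exactly that reason. A sketch of the distinction, against the default /var/tmp/spdk.sock:

    scripts/rpc.py spdk_get_version          # allowed: listed in --rpcs-allowed
    scripts/rpc.py rpc_get_methods           # allowed: listed in --rpcs-allowed
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: -32601 "Method not found"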
00:06:40.793 [2024-12-10 12:09:47.406763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466984 ] 00:06:40.793 [2024-12-10 12:09:47.515845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.050 [2024-12-10 12:09:47.621281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:41.984 { 00:06:41.984 "version": "SPDK v25.01-pre git sha1 52a413487", 00:06:41.984 "fields": { 00:06:41.984 "major": 25, 00:06:41.984 "minor": 1, 00:06:41.984 "patch": 0, 00:06:41.984 "suffix": "-pre", 00:06:41.984 "commit": "52a413487" 00:06:41.984 } 00:06:41.984 } 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.984 12:09:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:41.984 12:09:48 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.242 request: 00:06:42.242 { 00:06:42.242 "method": "env_dpdk_get_mem_stats", 00:06:42.242 "req_id": 1 00:06:42.242 } 00:06:42.242 Got JSON-RPC error response 00:06:42.242 response: 00:06:42.242 { 00:06:42.242 "code": -32601, 00:06:42.242 "message": "Method not found" 00:06:42.242 } 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.242 12:09:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3466984 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3466984 ']' 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3466984 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466984 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466984' 00:06:42.242 killing process with pid 3466984 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@973 -- # kill 3466984 00:06:42.242 12:09:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 3466984 00:06:44.775 00:06:44.775 real 0m4.105s 00:06:44.775 user 0m4.358s 00:06:44.775 sys 0m0.561s 00:06:44.775 12:09:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.775 12:09:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.775 ************************************ 00:06:44.775 END TEST app_cmdline 00:06:44.775 ************************************ 00:06:44.775 12:09:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.775 12:09:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.775 12:09:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.775 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.775 ************************************ 00:06:44.775 START TEST version 00:06:44.775 ************************************ 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.775 * Looking for test storage... 
00:06:44.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.775 12:09:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.775 12:09:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.775 12:09:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.775 12:09:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.775 12:09:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.775 12:09:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.775 12:09:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.775 12:09:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.775 12:09:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.775 12:09:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.775 12:09:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.775 12:09:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:44.775 12:09:51 version -- scripts/common.sh@345 -- # : 1 00:06:44.775 12:09:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.775 12:09:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.775 12:09:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:44.775 12:09:51 version -- scripts/common.sh@353 -- # local d=1 00:06:44.775 12:09:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.775 12:09:51 version -- scripts/common.sh@355 -- # echo 1 00:06:44.775 12:09:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.775 12:09:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:44.775 12:09:51 version -- scripts/common.sh@353 -- # local d=2 00:06:44.775 12:09:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.775 12:09:51 version -- scripts/common.sh@355 -- # echo 2 00:06:44.775 12:09:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.775 12:09:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.775 12:09:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.775 12:09:51 version -- scripts/common.sh@368 -- # return 0 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.775 --rc genhtml_branch_coverage=1 00:06:44.775 --rc genhtml_function_coverage=1 00:06:44.775 --rc genhtml_legend=1 00:06:44.775 --rc geninfo_all_blocks=1 00:06:44.775 --rc geninfo_unexecuted_blocks=1 00:06:44.775 00:06:44.775 ' 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.775 --rc genhtml_branch_coverage=1 00:06:44.775 --rc genhtml_function_coverage=1 00:06:44.775 --rc genhtml_legend=1 00:06:44.775 --rc geninfo_all_blocks=1 00:06:44.775 --rc geninfo_unexecuted_blocks=1 00:06:44.775 00:06:44.775 ' 00:06:44.775 12:09:51 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.775 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.775 --rc genhtml_branch_coverage=1 00:06:44.775 --rc genhtml_function_coverage=1 00:06:44.776 --rc genhtml_legend=1 00:06:44.776 --rc geninfo_all_blocks=1 00:06:44.776 --rc geninfo_unexecuted_blocks=1 00:06:44.776 00:06:44.776 ' 00:06:44.776 12:09:51 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.776 --rc genhtml_branch_coverage=1 00:06:44.776 --rc genhtml_function_coverage=1 00:06:44.776 --rc genhtml_legend=1 00:06:44.776 --rc geninfo_all_blocks=1 00:06:44.776 --rc geninfo_unexecuted_blocks=1 00:06:44.776 00:06:44.776 ' 00:06:44.776 12:09:51 version -- app/version.sh@17 -- # get_header_version major 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.776 12:09:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.776 12:09:51 version -- app/version.sh@17 -- # major=25 00:06:44.776 12:09:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:44.776 12:09:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.776 12:09:51 version -- app/version.sh@18 -- # minor=1 00:06:44.776 12:09:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.776 12:09:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.776 12:09:51 version -- app/version.sh@19 -- # patch=0 00:06:44.776 12:09:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:44.776 12:09:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.776 12:09:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.776 12:09:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:44.776 12:09:51 version -- app/version.sh@22 -- # version=25.1 00:06:44.776 12:09:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.776 12:09:51 version -- app/version.sh@28 -- # version=25.1rc0 00:06:44.776 12:09:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:44.776 12:09:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.776 12:09:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:44.776 12:09:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:44.776 00:06:44.776 real 0m0.242s 00:06:44.776 user 0m0.156s 00:06:44.776 sys 0m0.126s 00:06:44.776 12:09:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.776 
12:09:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:44.776 ************************************ 00:06:44.776 END TEST version 00:06:44.776 ************************************ 00:06:45.034 12:09:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:45.034 12:09:51 -- spdk/autotest.sh@194 -- # uname -s 00:06:45.034 12:09:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:45.034 12:09:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:45.034 12:09:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:45.034 12:09:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:45.034 12:09:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.034 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.034 12:09:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:45.034 12:09:51 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:45.034 12:09:51 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:45.034 12:09:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.034 12:09:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.034 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.034 ************************************ 00:06:45.034 START TEST nvmf_tcp 00:06:45.034 ************************************ 00:06:45.034 12:09:51 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:45.034 * Looking for test storage... 
00:06:45.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:45.034 12:09:51 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.034 12:09:51 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.034 12:09:51 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.034 12:09:51 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.034 12:09:51 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:45.034 12:09:51 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.034 12:09:51 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.035 --rc genhtml_branch_coverage=1 00:06:45.035 --rc genhtml_function_coverage=1 00:06:45.035 --rc genhtml_legend=1 00:06:45.035 --rc geninfo_all_blocks=1 00:06:45.035 --rc geninfo_unexecuted_blocks=1 00:06:45.035 00:06:45.035 ' 00:06:45.035 12:09:51 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.035 --rc genhtml_branch_coverage=1 00:06:45.035 --rc genhtml_function_coverage=1 00:06:45.035 --rc genhtml_legend=1 00:06:45.035 --rc geninfo_all_blocks=1 00:06:45.035 --rc geninfo_unexecuted_blocks=1 00:06:45.035 00:06:45.035 ' 00:06:45.035 12:09:51 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.035 --rc genhtml_branch_coverage=1 00:06:45.035 --rc genhtml_function_coverage=1 00:06:45.035 --rc genhtml_legend=1 00:06:45.035 --rc geninfo_all_blocks=1 00:06:45.035 --rc geninfo_unexecuted_blocks=1 00:06:45.035 00:06:45.035 ' 00:06:45.035 12:09:51 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.035 --rc genhtml_branch_coverage=1 00:06:45.035 --rc genhtml_function_coverage=1 00:06:45.035 --rc genhtml_legend=1 00:06:45.035 --rc geninfo_all_blocks=1 00:06:45.035 --rc geninfo_unexecuted_blocks=1 00:06:45.035 00:06:45.035 ' 00:06:45.035 12:09:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:45.035 12:09:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:45.035 12:09:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:45.035 12:09:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.035 12:09:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.035 12:09:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.293 ************************************ 00:06:45.293 START TEST nvmf_target_core 00:06:45.293 ************************************ 00:06:45.293 12:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:45.293 * Looking for test storage... 00:06:45.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:45.293 12:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.293 12:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.293 12:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.293 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.294 --rc genhtml_branch_coverage=1 00:06:45.294 --rc genhtml_function_coverage=1 00:06:45.294 --rc genhtml_legend=1 00:06:45.294 --rc geninfo_all_blocks=1 00:06:45.294 --rc geninfo_unexecuted_blocks=1 00:06:45.294 00:06:45.294 ' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.294 --rc genhtml_branch_coverage=1 00:06:45.294 --rc genhtml_function_coverage=1 00:06:45.294 --rc genhtml_legend=1 00:06:45.294 --rc geninfo_all_blocks=1 00:06:45.294 --rc geninfo_unexecuted_blocks=1 00:06:45.294 00:06:45.294 ' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.294 --rc genhtml_branch_coverage=1 00:06:45.294 --rc genhtml_function_coverage=1 00:06:45.294 --rc genhtml_legend=1 00:06:45.294 --rc geninfo_all_blocks=1 00:06:45.294 --rc geninfo_unexecuted_blocks=1 00:06:45.294 00:06:45.294 ' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.294 --rc genhtml_branch_coverage=1 00:06:45.294 --rc genhtml_function_coverage=1 00:06:45.294 --rc genhtml_legend=1 00:06:45.294 --rc geninfo_all_blocks=1 00:06:45.294 --rc geninfo_unexecuted_blocks=1 00:06:45.294 00:06:45.294 ' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.294 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.553 
************************************ 00:06:45.553 START TEST nvmf_abort 00:06:45.553 ************************************ 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:45.553 * Looking for test storage... 00:06:45.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.553 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.554 --rc genhtml_branch_coverage=1 00:06:45.554 --rc genhtml_function_coverage=1 00:06:45.554 --rc genhtml_legend=1 00:06:45.554 --rc geninfo_all_blocks=1 00:06:45.554 --rc geninfo_unexecuted_blocks=1 00:06:45.554 00:06:45.554 ' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.554 --rc genhtml_branch_coverage=1 00:06:45.554 --rc genhtml_function_coverage=1 00:06:45.554 --rc genhtml_legend=1 00:06:45.554 --rc geninfo_all_blocks=1 00:06:45.554 --rc geninfo_unexecuted_blocks=1 00:06:45.554 00:06:45.554 ' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.554 --rc genhtml_branch_coverage=1 00:06:45.554 --rc genhtml_function_coverage=1 00:06:45.554 --rc genhtml_legend=1 00:06:45.554 --rc geninfo_all_blocks=1 00:06:45.554 --rc geninfo_unexecuted_blocks=1 00:06:45.554 00:06:45.554 ' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.554 --rc genhtml_branch_coverage=1 00:06:45.554 --rc genhtml_function_coverage=1 00:06:45.554 --rc genhtml_legend=1 00:06:45.554 --rc geninfo_all_blocks=1 00:06:45.554 --rc geninfo_unexecuted_blocks=1 00:06:45.554 00:06:45.554 ' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
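Annotation: the scripts/common.sh steps traced above are autotest_common.sh deciding which lcov flags to export before the test proper starts. 'lt 1.15 2' asks whether the installed lcov (version pulled via lcov --version piped to awk '{print $NF}') predates 2.x, and cmp_versions answers by splitting both strings on '.', '-' and ':' and comparing field by field. A minimal sketch of just the '<' path, reconstructed from the operations visible in the trace; treating absent fields as 0 and the equal-versions fallthrough are assumptions, since the full helper also dispatches on other operators via $op:

#!/usr/bin/env bash
# Sketch of the version comparison exercised in the trace above.
lt() {
    local ver1 ver2 v len
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # newer: not less
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # older: less
    done
    return 1  # equal versions: not strictly less (assumption for this sketch)
}
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi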
00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.554 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.822 12:09:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:50.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:50.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.822 12:09:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:50.822 Found net devices under 0000:af:00.0: cvl_0_0 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:50.822 Found net devices under 0000:af:00.1: cvl_0_1 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.822 12:09:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.822 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:06:51.082 00:06:51.082 --- 10.0.0.2 ping statistics --- 00:06:51.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.082 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
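Annotation: this ping exchange is the final connectivity check of nvmftestinit. Condensed, the topology being checked was built exactly as the trace shows: the target-side port moves into a private network namespace while the initiator side stays in the root namespace. Every command below appears verbatim above; cvl_0_0 and cvl_0_1 are this host's ice-driver ports and will differ on other machines:

# Target port goes into its own namespace; initiator port stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port through the initiator-side firewall; the trace tags
# the rule with an SPDK_NVMF comment so teardown can strip it again later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns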
00:06:51.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:06:51.082 00:06:51.082 --- 10.0.0.1 ping statistics --- 00:06:51.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.082 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3471041 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3471041 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3471041 ']' 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.082 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.082 [2024-12-10 12:09:57.884915] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
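Annotation: the notices that follow are nvmf_tgt coming up inside the namespace. nvmfappstart reduces to launching the target under ip netns exec and blocking until the RPC socket answers; per the trace the wait is on /var/tmp/spdk.sock. A sketch, with paths relative to the spdk checkout and the flag readings in the comments taken from the startup notices below (shm id 0, tracepoint group mask 0xFFFF, reactors on cores 1-3):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &  # -i shm id, -e trace mask, -m core mask
nvmfpid=$!
waitforlisten "$nvmfpid"  # autotest_common.sh helper: polls /var/tmp/spdk.sock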
00:06:51.082 [2024-12-10 12:09:57.885003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.341 [2024-12-10 12:09:58.002903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.341 [2024-12-10 12:09:58.113879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.341 [2024-12-10 12:09:58.113926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.341 [2024-12-10 12:09:58.113937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.341 [2024-12-10 12:09:58.113947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.341 [2024-12-10 12:09:58.113955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.341 [2024-12-10 12:09:58.116119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.341 [2024-12-10 12:09:58.116194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.341 [2024-12-10 12:09:58.116203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.907 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.907 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:51.907 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.907 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.907 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 [2024-12-10 12:09:58.747286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 Malloc0 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 Delay0 
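Annotation: with the target up, the rest of the fixture is pure JSON-RPC. rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the calls in the trace, including the subsystem wiring that follows just below, line up as shown here. Flag spellings are verbatim from the trace; the comment on the delay bdev is an inference, in that roughly a second of injected latency keeps I/O in flight long enough for aborts to land:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB RAM disk, 4096-byte blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s injected latency over Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420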
00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 [2024-12-10 12:09:58.888195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.166 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:52.424 [2024-12-10 12:09:59.005830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:54.324 Initializing NVMe Controllers 00:06:54.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:54.324 controller IO queue size 128 less than required 00:06:54.324 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:54.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:54.324 Initialization complete. Launching workers. 
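Annotation: what runs here is the stock abort example from the SPDK build, pointed at the listener configured above. The command line is verbatim from the trace; the flag readings in the comments are inferred from the run's own output (the queue-size notice, the lcore 0 association, the bounded runtime):

# -c 0x1: single core (lcore 0); -t 1: time-bounded run; -l warning: log level;
# -q 128: queue depth, which triggers the "queue size 128 less than required" notice.
./build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

A plausible reading of the tallies just below: 127 I/Os completed normally while 34042 came back aborted; 34103 abort commands were submitted (66 more could not be), of which 34042 succeeded, 61 were unsuccessful, and none failed outright.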
00:06:54.324 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34042 00:06:54.324 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34103, failed to submit 66 00:06:54.324 success 34042, unsuccessful 61, failed 0 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.582 rmmod nvme_tcp 00:06:54.582 rmmod nvme_fabrics 00:06:54.582 rmmod nvme_keyring 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3471041 ']' 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3471041 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3471041 ']' 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3471041 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3471041 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3471041' 00:06:54.582 killing process with pid 3471041 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3471041 00:06:54.582 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3471041 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:55.956 12:10:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.956 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:58.488 00:06:58.488 real 0m12.574s 00:06:58.488 user 0m16.438s 00:06:58.488 sys 0m5.147s 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:58.488 ************************************ 00:06:58.488 END TEST nvmf_abort 00:06:58.488 ************************************ 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:58.488 ************************************ 00:06:58.488 START TEST nvmf_ns_hotplug_stress 00:06:58.488 ************************************ 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:58.488 * Looking for test storage... 
00:06:58.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.488 --rc genhtml_branch_coverage=1 00:06:58.488 --rc genhtml_function_coverage=1 00:06:58.488 --rc genhtml_legend=1 00:06:58.488 --rc geninfo_all_blocks=1 00:06:58.488 --rc geninfo_unexecuted_blocks=1 00:06:58.488 00:06:58.488 ' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.488 --rc genhtml_branch_coverage=1 00:06:58.488 --rc genhtml_function_coverage=1 00:06:58.488 --rc genhtml_legend=1 00:06:58.488 --rc geninfo_all_blocks=1 00:06:58.488 --rc geninfo_unexecuted_blocks=1 00:06:58.488 00:06:58.488 ' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.488 --rc genhtml_branch_coverage=1 00:06:58.488 --rc genhtml_function_coverage=1 00:06:58.488 --rc genhtml_legend=1 00:06:58.488 --rc geninfo_all_blocks=1 00:06:58.488 --rc geninfo_unexecuted_blocks=1 00:06:58.488 00:06:58.488 ' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.488 --rc genhtml_branch_coverage=1 00:06:58.488 --rc genhtml_function_coverage=1 00:06:58.488 --rc genhtml_legend=1 00:06:58.488 --rc geninfo_all_blocks=1 00:06:58.488 --rc geninfo_unexecuted_blocks=1 00:06:58.488 00:06:58.488 ' 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.488 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
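Note how the exported PATH above carries the golangci/protoc/go prefixes many times over: paths/export.sh prepends them on every source, and this job sources it repeatedly. The scripts tolerate the duplicates, but a hedged one-liner that would collapse them while preserving order (awk and sed assumed available; this is a suggestion, not something the job runs):

  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
  export PATH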
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:58.489 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
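The "[: : integer expression expected" message above is real script noise rather than a test failure: line 33 of test/nvmf/common.sh runs '[' '' -eq 1 ']' because the variable it tests is unset, so the numeric comparison errors out and the branch is simply skipped. A hedged sketch of the guard that would silence it; the flag name is purely hypothetical, since the log does not show which variable was empty:

  : "${SOME_SPDK_FLAG:=0}"              # hypothetical name; default to 0 first
  if [ "$SOME_SPDK_FLAG" -eq 1 ]; then  # now always a valid integer test
    echo "flag enabled"
  fi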
local -ga e810 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:03.759 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.759 
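gather_supported_nvmf_pci_devs is walking the PCI bus here: it builds ID lists for Intel E810/X722 and Mellanox parts, then reports each function whose vendor:device pair matches, which is how the "Found 0000:af:00.x (0x8086 - 0x159b)" lines in the trace are produced. A small sysfs sketch of that matching step, a paraphrase rather than the helper's own code:

  # Report Intel E810 (0x8086:0x159b) functions, the pair matched in the log.
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
      echo "Found ${pci##*/} ($vendor - $device)"
    fi
  done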
12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:03.759 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.759 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:03.760 Found net devices under 0000:af:00.0: cvl_0_0 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:03.760 Found net devices under 0000:af:00.1: cvl_0_1 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:03.760 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
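nvmf_tcp_init has now split the two discovered ports across network namespaces: the target side (cvl_0_0) moves into cvl_0_0_ns_spdk as 10.0.0.2, while the initiator side (cvl_0_1) stays in the root namespace as 10.0.0.1. Replayed in order from the trace (run as root; the interface names are the ones the log discovered):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up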
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:07:04.019 00:07:04.019 --- 10.0.0.2 ping statistics --- 00:07:04.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.019 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:07:04.019 00:07:04.019 --- 10.0.0.1 ping statistics --- 00:07:04.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.019 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3475234 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3475234 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
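Before starting the target, the script opens the NVMe/TCP port in the firewall and proves reachability in both directions; both pings above come back in under half a millisecond, so the namespace plumbing is sound. The exact commands from the trace (the -m comment tag appears to be the helper's own bookkeeping, presumably so the rule can be matched and removed at teardown):

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # root netns -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target netns -> initiator side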
3475234 ']' 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.019 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:04.019 [2024-12-10 12:10:10.774463] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:04.019 [2024-12-10 12:10:10.774550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.277 [2024-12-10 12:10:10.893974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.277 [2024-12-10 12:10:11.002266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.278 [2024-12-10 12:10:11.002316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.278 [2024-12-10 12:10:11.002327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.278 [2024-12-10 12:10:11.002355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.278 [2024-12-10 12:10:11.002363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
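nvmfappstart has launched the target inside the namespace (pid 3475234 in this run) and waitforlisten blocks until the RPC socket answers; the DPDK EAL notices and the three reactors on cores 1-3 confirm the -m 0xE core mask took effect. A sketch of that launch-and-poll pattern, assuming the repo root as working directory; the polling loop is a paraphrase of what waitforlisten accomplishes, not its actual body:

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the default RPC socket until the app responds (or dies first).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done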
00:07:04.278 [2024-12-10 12:10:11.004793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.278 [2024-12-10 12:10:11.004855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.278 [2024-12-10 12:10:11.004865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:04.845 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:05.104 [2024-12-10 12:10:11.793719] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.104 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:05.363 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.621 [2024-12-10 12:10:12.207095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.621 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.879 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:05.879 Malloc0 00:07:05.879 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.138 Delay0 00:07:06.138 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.396 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:06.655 NULL1 00:07:06.655 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
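With the target up, the test provisions everything over RPC: a TCP transport, subsystem cnode1 capped at 10 namespaces by -m 10, data and discovery listeners on 10.0.0.2:4420, and a Malloc0 -> Delay0 bdev chain (Delay0 wraps Malloc0 with 1,000,000 us average latencies) plus a 1000 MiB null bdev to hot-plug; the NULL1 namespace attach follows immediately below. The same calls in order, arguments copied from the trace, with the rpc.py path shortened:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512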
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:06.655 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3475738 00:07:06.655 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:06.655 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:06.655 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.914 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.173 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:07.173 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:07.432 true 00:07:07.432 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:07.432 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.690 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.690 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:07.690 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:07.949 true 00:07:07.949 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:07.949 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.207 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.465 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:08.465 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:08.724 true 00:07:08.724 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:08.724 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
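This is the whole stress pattern that the rest of the log repeats every half-second or so: spdk_nvme_perf (PERF_PID 3475738) hammers the subsystem with 30 seconds of 512-byte random reads at queue depth 128 while the loop removes namespace 1, re-adds Delay0, and grows NULL1 by one MiB per pass; null_size ticking 1001, 1002, ... through the trace is that counter. Paraphrased from the trace, with rpc as defined in the sketch above and paths shortened:

  ./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do   # loop until the perf job exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"  # the "true" lines in the log are its output
  done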
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.983 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.983 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:08.983 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:09.242 true 00:07:09.242 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:09.242 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.501 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.759 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:09.759 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:10.017 true 00:07:10.017 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:10.017 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.276 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.276 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:10.276 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:10.535 true 00:07:10.535 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:10.535 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.794 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.053 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:11.053 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:11.053 true 00:07:11.311 12:10:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:11.312 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.312 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.570 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:11.570 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:11.829 true 00:07:11.829 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:11.829 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.088 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.347 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:12.347 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:12.347 true 00:07:12.347 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:12.347 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.606 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.865 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:12.865 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:13.124 true 00:07:13.124 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:13.124 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.382 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.382 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:13.382 12:10:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:13.641 true 00:07:13.641 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:13.641 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.900 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.159 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:14.159 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:14.418 true 00:07:14.418 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:14.418 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.677 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.677 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:14.677 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:14.936 true 00:07:14.936 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:14.936 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.195 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.454 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:15.454 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:15.713 true 00:07:15.713 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:15.713 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.713 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.972 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:15.972 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:16.231 true 00:07:16.231 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:16.231 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.491 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.750 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:16.750 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:17.009 true 00:07:17.009 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:17.009 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.267 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.267 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:17.267 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:17.526 true 00:07:17.526 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:17.527 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.785 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.044 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:18.044 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:18.303 true 00:07:18.303 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:18.303 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.303 12:10:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.561 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:18.561 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:18.819 true 00:07:18.819 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:18.819 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.078 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.337 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:19.337 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:19.337 true 00:07:19.596 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:19.596 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.596 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.854 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:19.854 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:20.112 true 00:07:20.112 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:20.112 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.369 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.627 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:20.627 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:20.627 true 00:07:20.885 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:20.885 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.885 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.143 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:21.143 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:21.401 true 00:07:21.401 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:21.401 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.659 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.917 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:21.917 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:21.917 true 00:07:21.917 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:21.917 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.175 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.433 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:22.433 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:22.692 true 00:07:22.692 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:22.692 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.950 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.208 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:23.208 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:23.208 true 00:07:23.208 12:10:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:23.466 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.466 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.724 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:23.724 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:23.982 true 00:07:23.982 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:23.982 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.240 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.519 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:24.519 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:24.519 true 00:07:24.814 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:24.814 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.814 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.094 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:25.094 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:25.363 true 00:07:25.363 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:25.363 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.621 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.621 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:25.621 12:10:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:25.878 true 00:07:25.878 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:25.878 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.134 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.392 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:26.392 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:26.649 true 00:07:26.649 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:26.649 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.907 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.907 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:26.907 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:27.165 true 00:07:27.165 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:27.165 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.423 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.680 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:27.680 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:27.938 true 00:07:27.938 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:27.938 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.195 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.453 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:28.453 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:28.453 true 00:07:28.453 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:28.453 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.710 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.968 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:28.968 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:29.225 true 00:07:29.225 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:29.225 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.484 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.484 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:29.484 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:29.743 true 00:07:29.743 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:29.743 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.001 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.258 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:30.258 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:30.515 true 00:07:30.515 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:30.515 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.773 12:10:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.773 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:30.773 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:31.031 true 00:07:31.031 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:31.031 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.288 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.545 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:31.545 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:31.804 true 00:07:31.804 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:31.804 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.062 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.062 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:32.062 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:32.320 true 00:07:32.320 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:32.320 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.578 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.836 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:32.836 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:33.094 true 00:07:33.094 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:33.094 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.351 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.608 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:33.608 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:33.608 true 00:07:33.608 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:33.608 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.865 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.123 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:34.123 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:34.381 true 00:07:34.381 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:34.381 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.639 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.897 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:34.897 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:34.897 true 00:07:34.897 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:34.897 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.155 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.413 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:35.413 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:35.671 true 00:07:35.671 12:10:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:35.671 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.928 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.186 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:36.186 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:36.186 true 00:07:36.186 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:36.186 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.444 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.702 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:36.702 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:36.960 true 00:07:36.960 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:36.960 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.960 Initializing NVMe Controllers 00:07:36.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:36.960 Controller IO queue size 128, less than required. 00:07:36.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:36.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:36.960 Initialization complete. Launching workers. 
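The churn traced above repeats five lines of test/nvmf/target/ns_hotplug_stress.sh per pass: @44 checks that the I/O generator (PID 3475738) is still alive, @45/@46 hot-remove namespace 1 and re-add it backed by the Delay0 bdev, and @49/@50 step null_size and resize the NULL1 bdev under load; the bare "true" entries are the resize RPC's printed result. A minimal bash reconstruction of that loop, assuming a hypothetical PERF_PID and starting size, with only the RPC names, the NQN, and the bdev names taken verbatim from the xtrace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1026                                                    # hypothetical start; the trace shows 1027..1048
    while kill -0 "$PERF_PID"; do                                     # @44: loop while the I/O generator runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-unplug NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach it, backed by Delay0
        null_size=$((null_size + 1))                                  # @49: the trace shows the size stepping by one
        $rpc bdev_null_resize NULL1 $null_size                        # @50: grow NULL1 under active I/O
    done                                                              # kill -0 fails once the generator exits ...
    wait "$PERF_PID"                                                  # @53: ... and its exit status is reaped

The generator's startup banner has just flushed above; its latency summary follows.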
00:07:36.960 ========================================================
00:07:36.960 Latency(us)
00:07:36.960 Device Information                                                      : IOPS      MiB/s   Average   min       max
00:07:36.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23156.57  11.31   5528.37   2904.29   44835.44
00:07:36.960 ========================================================
00:07:36.960 Total                                                                   : 23156.57  11.31   5528.37   2904.29   44835.44
00:07:37.218 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.218 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:37.218 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:37.554 true 00:07:37.554 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475738 00:07:37.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3475738) - No such process 00:07:37.555 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3475738 00:07:37.555 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.813 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.813 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:37.813 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:37.813 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:37.813 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.813 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:38.071 null0 00:07:38.071 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.071 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.071 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:38.328 null1 00:07:38.328 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.328 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.328 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:38.586 null2 00:07:38.586 12:10:45
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.586 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.586 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:38.586 null3 00:07:38.843 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.843 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.843 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:38.843 null4 00:07:38.843 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.843 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.843 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:39.101 null5 00:07:39.101 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:39.101 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:39.101 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:39.359 null6 00:07:39.359 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:39.359 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:39.359 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:39.617 null7 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.617 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
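By this point @59/@60 above have created null0 through null7 and the @62-@64 loop is launching one backgrounded add_remove worker per bdev; the interleaved @14-@18 entries are those workers' own xtrace. Each worker pins one NSID to one null bdev and adds and removes it ten times. A hedged reconstruction of add_remove as traced, with $rpc again standing in for the full scripts/rpc.py path shown in the log:

    add_remove() {
        local nsid=$1 bdev=$2             # @14: fixed NSID <-> null-bdev pairing per worker
        for ((i = 0; i < 10; i++)); do    # @16: ten add/remove rounds per namespace
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }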
00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
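The driver for those workers is the @58-@66 block traced around these entries: size the pool at eight threads, create the null bdevs, fan out add_remove for NSIDs 1 through 8, collect the worker PIDs, and block on all of them; the last two spawns and the @66 wait over the eight PIDs appear immediately below. A minimal sketch under the same assumptions:

    nthreads=8                                    # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do          # @59
        $rpc bdev_null_create "null$i" 100 4096   # @60: 100 MiB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do          # @62
        add_remove $((i + 1)) "null$i" &          # @63: churn NSIDs 1-8 concurrently
        pids+=($!)                                # @64: remember each worker's PID
    done
    wait "${pids[@]}"                             # @66: block until every worker finishes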
00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3481428 3481430 3481431 3481433 3481435 3481437 3481439 3481441 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.618 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.876 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.877 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.134 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.134 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.135 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.135 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.135 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.135 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.135 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.135 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.393 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.651 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.651 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.651 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.652 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.910 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.169 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.427 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.686 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.944 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.945 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.203 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.461 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.719 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.978 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:43.236 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:43.236 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:43.236 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:43.236 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:43.236 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.236 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.236 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.236 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.237 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.495 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.753 rmmod nvme_tcp 00:07:43.753 rmmod nvme_fabrics 00:07:43.753 rmmod nvme_keyring 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3475234 ']' 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3475234 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3475234 ']' 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3475234 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.753 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475234 00:07:44.012 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:44.012 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:44.012 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3475234' 00:07:44.012 killing process with pid 3475234 00:07:44.012 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3475234 00:07:44.012 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3475234 00:07:45.386 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.387 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.287 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.287 00:07:47.287 real 0m49.155s 00:07:47.287 user 3m27.226s 00:07:47.287 sys 0m17.004s 00:07:47.287 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.287 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 ************************************ 00:07:47.287 END TEST nvmf_ns_hotplug_stress 00:07:47.287 ************************************ 00:07:47.287 12:10:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:47.287 12:10:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:47.287 12:10:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.287 12:10:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 ************************************ 00:07:47.288 START TEST nvmf_delete_subsystem 00:07:47.288 ************************************ 00:07:47.288 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:47.288 * Looking for test storage... 
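The banner pair and the real/user/sys totals above come from the run_test harness named in the trace (nvmf_target_core.sh@23). A hypothetical stand-in that reproduces only the behavior visible in the log - the real helper in SPDK's common test code also does result bookkeeping not shown here:

# Hypothetical stand-in for run_test (assumption: the real SPDK helper
# does more than the banner-and-timing behavior reproduced here).
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"  # bash's time keyword emits the real/user/sys summary seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test nvmf_delete_subsystem ./test/nvmf/target/delete_subsystem.sh --transport=tcp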
00:07:47.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.288 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:47.288 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:47.288 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.546 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:47.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.547 --rc genhtml_branch_coverage=1 00:07:47.547 --rc genhtml_function_coverage=1 00:07:47.547 --rc genhtml_legend=1 00:07:47.547 --rc geninfo_all_blocks=1 00:07:47.547 --rc geninfo_unexecuted_blocks=1 00:07:47.547 00:07:47.547 ' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:47.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.547 --rc genhtml_branch_coverage=1 00:07:47.547 --rc genhtml_function_coverage=1 00:07:47.547 --rc genhtml_legend=1 00:07:47.547 --rc geninfo_all_blocks=1 00:07:47.547 --rc geninfo_unexecuted_blocks=1 00:07:47.547 00:07:47.547 ' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:47.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.547 --rc genhtml_branch_coverage=1 00:07:47.547 --rc genhtml_function_coverage=1 00:07:47.547 --rc genhtml_legend=1 00:07:47.547 --rc geninfo_all_blocks=1 00:07:47.547 --rc geninfo_unexecuted_blocks=1 00:07:47.547 00:07:47.547 ' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:47.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.547 --rc genhtml_branch_coverage=1 00:07:47.547 --rc genhtml_function_coverage=1 00:07:47.547 --rc genhtml_legend=1 00:07:47.547 --rc geninfo_all_blocks=1 00:07:47.547 --rc geninfo_unexecuted_blocks=1 00:07:47.547 00:07:47.547 ' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.548 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.548 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.548 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.548 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:52.812 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.812 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.813 
12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:52.813 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:52.813 Found net devices under 0000:af:00.0: cvl_0_0 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:52.813 Found net devices under 0000:af:00.1: cvl_0_1 
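Above, nvmf/common.sh has walked the PCI bus, matched the two Intel E810 ports (vendor 0x8086, device 0x159b, driver ice), and resolved each function to its renamed netdev via sysfs. The resolution step, condensed from the trace (the link-state check at common.sh@418 is elided; array names are as they appear in the xtrace):

# Net-device discovery as traced above, condensed; pci_devs holds the two
# E810 addresses (0000:af:00.0 and 0000:af:00.1) found earlier in the log.
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev dirs bound to this port
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip paths, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done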
00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:52.813 00:07:52.813 --- 10.0.0.2 ping statistics --- 00:07:52.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.813 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:52.813 00:07:52.813 --- 10.0.0.1 ping statistics --- 00:07:52.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.813 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3485950 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3485950 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3485950 ']' 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
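nvmf_tcp_init, traced above, turns the two physical ports into a target/initiator pair by isolating the target port in its own network namespace; the two pings above are the smoke test that traffic really crosses between the ports. A condensed sketch of the same commands, all taken from the trace (the iptables comment tag is dropped for brevity):

    # Sketch of the topology built above: cvl_0_0 (target side) moves into
    # a private netns so 10.0.0.1 <-> 10.0.0.2 traffic traverses the wire
    # between the two ports instead of the kernel loopback.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1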
00:07:52.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.813 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 [2024-12-10 12:10:59.596563] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:52.813 [2024-12-10 12:10:59.596650] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.071 [2024-12-10 12:10:59.712803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.071 [2024-12-10 12:10:59.818381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.071 [2024-12-10 12:10:59.818425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.071 [2024-12-10 12:10:59.818435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.071 [2024-12-10 12:10:59.818447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.071 [2024-12-10 12:10:59.818455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.072 [2024-12-10 12:10:59.820450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.072 [2024-12-10 12:10:59.820452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.637 [2024-12-10 12:11:00.433352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:53.637 
12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.637 [2024-12-10 12:11:00.449565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.637 NULL1 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.637 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.895 Delay0 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3485991 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:53.895 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:53.895 [2024-12-10 12:11:00.575134] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
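The rpc_cmd calls traced above are the whole target-side setup for this test; rpc_cmd forwards to SPDK's JSON-RPC interface, so the sequence is functionally equivalent to issuing the same RPCs with scripts/rpc.py. A sketch replaying it, with every RPC name and flag taken verbatim from the trace:

    # Sketch of the setup above. The delay bdev wraps the null bdev with
    # 1000000 us (~1 s) of latency in every direction, so a 5-second perf
    # run is guaranteed to have I/O still queued when the subsystem is
    # deleted, which is exactly what this test exercises next.
    rpc=./scripts/rpc.py   # path assumed; rpc_cmd resolves this itself
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512      # backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The flood of "completed with error (sct=0, sc=8)" lines that follows is the expected outcome: deleting cnode1 while spdk_nvme_perf is mid-run aborts every queued I/O.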
00:07:55.792 12:11:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.792 12:11:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.793 12:11:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 [2024-12-10 12:11:02.748783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:07:56.051 Write completed 
with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 starting I/O failed: -6 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 [2024-12-10 12:11:02.750857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Write completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.051 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error 
(sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 [2024-12-10 12:11:02.751828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 
00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Read completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 Write completed with error (sct=0, sc=8) 00:07:56.052 [2024-12-10 12:11:02.752833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:07:56.985 [2024-12-10 12:11:03.713468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 [2024-12-10 12:11:03.753238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 
00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 [2024-12-10 12:11:03.753977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 [2024-12-10 12:11:03.754877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set 00:07:56.985 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 Read completed with error (sct=0, sc=8) 00:07:56.985 Write completed with error (sct=0, sc=8) 00:07:56.985 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:56.985 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3485991 00:07:56.985 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:56.985 [2024-12-10 12:11:03.763644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:07:56.985 Initializing NVMe Controllers 00:07:56.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:56.985 Controller IO queue size 128, less than required. 
00:07:56.985 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:56.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:56.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:56.985 Initialization complete. Launching workers. 00:07:56.985 ======================================================== 00:07:56.985 Latency(us) 00:07:56.985 Device Information : IOPS MiB/s Average min max 00:07:56.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.21 0.09 964314.81 2353.79 1045926.33 00:07:56.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.92 0.07 910044.36 984.45 1012166.46 00:07:56.985 ======================================================== 00:07:56.985 Total : 323.13 0.16 939472.11 984.45 1045926.33 00:07:56.985 00:07:56.985 [2024-12-10 12:11:03.765337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:07:56.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3485991 00:07:57.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3485991) - No such process 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3485991 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3485991 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3485991 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.551 12:11:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.551 [2024-12-10 12:11:04.288639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3486664 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:07:57.551 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.809 [2024-12-10 12:11:04.399870] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
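The delay/kill -0/sleep lines that follow are delete_subsystem.sh polling for the perf process to die once its subsystem disappears. kill -0 sends no signal; it only tests that the PID still exists. A minimal sketch of the loop's logic (variable names assumed, matching the script's @56-@60 trace):

    # Sketch: poll until spdk_nvme_perf exits, with a bounded retry budget.
    # Success here is kill -0 failing ("No such process"), i.e. the perf
    # process terminated after its target subsystem was deleted.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf $perf_pid still running"; exit 1; }
        sleep 0.5
    done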
00:07:58.066 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.066 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:07:58.066 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:58.631 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.631 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:07:58.631 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.195 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:59.195 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:07:59.195 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.761 12:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:59.761 12:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:07:59.761 12:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:00.018 12:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:00.018 12:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:08:00.018 12:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:00.584 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:00.584 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:08:00.584 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:00.842 Initializing NVMe Controllers 00:08:00.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:00.842 Controller IO queue size 128, less than required. 00:08:00.842 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:00.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:00.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:00.842 Initialization complete. Launching workers. 
00:08:00.842 ======================================================== 00:08:00.842 Latency(us) 00:08:00.842 Device Information : IOPS MiB/s Average min max 00:08:00.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003637.08 1000163.20 1042458.19 00:08:00.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005038.82 1000286.95 1011867.25 00:08:00.842 ======================================================== 00:08:00.842 Total : 256.00 0.12 1004337.95 1000163.20 1042458.19 00:08:00.842 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3486664 00:08:01.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3486664) - No such process 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3486664 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.100 rmmod nvme_tcp 00:08:01.100 rmmod nvme_fabrics 00:08:01.100 rmmod nvme_keyring 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3485950 ']' 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3485950 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3485950 ']' 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3485950 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.100 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3485950 00:08:01.358 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.358 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:01.358 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3485950' 00:08:01.358 killing process with pid 3485950 00:08:01.358 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3485950 00:08:01.358 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3485950 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.292 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:04.824 00:08:04.824 real 0m17.136s 00:08:04.824 user 0m31.917s 00:08:04.824 sys 0m5.164s 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.824 ************************************ 00:08:04.824 END TEST nvmf_delete_subsystem 00:08:04.824 ************************************ 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.824 ************************************ 00:08:04.824 START TEST nvmf_host_management 00:08:04.824 ************************************ 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:04.824 * Looking for test storage... 
00:08:04.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.824 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.825 --rc genhtml_branch_coverage=1 00:08:04.825 --rc genhtml_function_coverage=1 00:08:04.825 --rc genhtml_legend=1 00:08:04.825 --rc geninfo_all_blocks=1 00:08:04.825 --rc geninfo_unexecuted_blocks=1 00:08:04.825 00:08:04.825 ' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.825 --rc genhtml_branch_coverage=1 00:08:04.825 --rc genhtml_function_coverage=1 00:08:04.825 --rc genhtml_legend=1 00:08:04.825 --rc geninfo_all_blocks=1 00:08:04.825 --rc geninfo_unexecuted_blocks=1 00:08:04.825 00:08:04.825 ' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.825 --rc genhtml_branch_coverage=1 00:08:04.825 --rc genhtml_function_coverage=1 00:08:04.825 --rc genhtml_legend=1 00:08:04.825 --rc geninfo_all_blocks=1 00:08:04.825 --rc geninfo_unexecuted_blocks=1 00:08:04.825 00:08:04.825 ' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.825 --rc genhtml_branch_coverage=1 00:08:04.825 --rc genhtml_function_coverage=1 00:08:04.825 --rc genhtml_legend=1 00:08:04.825 --rc geninfo_all_blocks=1 00:08:04.825 --rc geninfo_unexecuted_blocks=1 00:08:04.825 00:08:04.825 ' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:04.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.825 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:04.826 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:04.826 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.826 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.149 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:10.150 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:10.150 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:10.150 Found net devices under 0000:af:00.0: cvl_0_0 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.150 12:11:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:10.150 Found net devices under 0000:af:00.1: cvl_0_1 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:08:10.150 00:08:10.150 --- 10.0.0.2 ping statistics --- 00:08:10.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.150 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:08:10.150 00:08:10.150 --- 10.0.0.1 ping statistics --- 00:08:10.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.150 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:10.150 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3490826 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3490826 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:10.151 12:11:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3490826 ']' 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.151 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.448 [2024-12-10 12:11:16.961092] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:10.448 [2024-12-10 12:11:16.961212] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.448 [2024-12-10 12:11:17.080881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.448 [2024-12-10 12:11:17.186472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.448 [2024-12-10 12:11:17.186517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.448 [2024-12-10 12:11:17.186526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.448 [2024-12-10 12:11:17.186552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.448 [2024-12-10 12:11:17.186560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
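The namespace plumbing that nvmftestinit performed above reduces to a handful of iproute2/iptables calls. A minimal replay sketch, with a veth pair standing in for the physical cvl_0_0/cvl_0_1 ice ports (the veth substitution and running as root are assumptions; the names, addresses, and port-4420 rule mirror the log):

# Target side lives in its own namespace so initiator and target can share one host.
ip netns add cvl_0_0_ns_spdk
ip link add cvl_0_1 type veth peer name cvl_0_0      # stand-in for the two ice ports
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-facing end into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port. The harness's ipts helper additionally tags the rule
# with an SPDK_NVMF comment so teardown can strip it via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target, as in the log
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator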
00:08:10.448 [2024-12-10 12:11:17.188831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.448 [2024-12-10 12:11:17.188915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.448 [2024-12-10 12:11:17.189013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.448 [2024-12-10 12:11:17.189035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.047 [2024-12-10 12:11:17.812899] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.047 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.305 Malloc0 00:08:11.305 [2024-12-10 12:11:17.934199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3491080 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3491080 /var/tmp/bdevperf.sock 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3491080 ']' 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:11.305 { 00:08:11.305 "params": { 00:08:11.305 "name": "Nvme$subsystem", 00:08:11.305 "trtype": "$TEST_TRANSPORT", 00:08:11.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.305 "adrfam": "ipv4", 00:08:11.305 "trsvcid": "$NVMF_PORT", 00:08:11.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.305 "hdgst": ${hdgst:-false}, 00:08:11.305 "ddgst": ${ddgst:-false} 00:08:11.305 }, 00:08:11.305 "method": "bdev_nvme_attach_controller" 00:08:11.305 } 00:08:11.305 EOF 00:08:11.305 )") 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:11.305 12:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:11.305 "params": { 00:08:11.305 "name": "Nvme0", 00:08:11.305 "trtype": "tcp", 00:08:11.305 "traddr": "10.0.0.2", 00:08:11.305 "adrfam": "ipv4", 00:08:11.305 "trsvcid": "4420", 00:08:11.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:11.305 "hdgst": false, 00:08:11.305 "ddgst": false 00:08:11.305 }, 00:08:11.305 "method": "bdev_nvme_attach_controller" 00:08:11.305 }' 00:08:11.305 [2024-12-10 12:11:18.055412] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:11.305 [2024-12-10 12:11:18.055498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491080 ] 00:08:11.564 [2024-12-10 12:11:18.168797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.564 [2024-12-10 12:11:18.281844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.130 Running I/O for 10 seconds... 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:12.130 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:12.387 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:12.388 
12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:12.388 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:12.388 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.388 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.388 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=614 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 614 -ge 100 ']' 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.647 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.647 [2024-12-10 12:11:19.246840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:12.647 [2024-12-10 12:11:19.246970] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set
00:08:12.647 (previous recv-state error repeated for the same tqpair through [2024-12-10 12:11:19.247032])
12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:12.647 [2024-12-10 12:11:19.254443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:12.647 [2024-12-10 12:11:19.254483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:12.649 (previous WRITE/ABORTED - SQ DELETION pair repeated for cid:1 through cid:63, lba:90240 through lba:98176 in steps of 128, [2024-12-10 12:11:19.254510] through [2024-12-10 12:11:19.255848]: every WRITE outstanding at queue depth 64 completes as ABORTED - SQ DELETION)
00:08:12.649 [2024-12-10 12:11:19.257129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:12.649 task offset: 90112 on job bdev=Nvme0n1 fails
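The burst of aborts above is the point of this test step: host_management.sh@84 revoked the initiator's access with nvmf_subsystem_remove_host while bdevperf still had 64 WRITEs queued (matching its -q 64 depth), so the target deleted the submission queue and every in-flight command completed as ABORTED - SQ DELETION before @85 re-admitted the host. Outside the harness's rpc_cmd wrapper, the same toggle is two plain RPC calls (assuming the target's default /var/tmp/spdk.sock RPC socket):

# Revoke the host: its connection is torn down and queued I/O aborts.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the same host NQN so it may reconnect.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0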
00:08:12.649
00:08:12.649                                    Latency(us)
[2024-12-10T11:11:19.475Z] Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average   min      max
Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:12.649 Job: Nvme0n1 ended in about 0.42 seconds with error
00:08:12.649 Verification LBA range: start 0x0 length 0x400
00:08:12.649 Nvme0n1            : 0.42       1687.89  105.49  153.44  0.00  33596.41  2075.31  37199.48
[2024-12-10T11:11:19.475Z] ===================================================================================================================
[2024-12-10T11:11:19.475Z] Total              : 1687.89    105.49  153.44  0.00  33596.41  2075.31  37199.48
12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-12-10 12:11:19.273636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
[2024-12-10 12:11:19.273676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
[2024-12-10 12:11:19.281558] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:08:13.582 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3491080
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3491080) - No such process
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:13.582 {
00:08:13.582   "params": {
00:08:13.582     "name": "Nvme$subsystem",
00:08:13.582     "trtype": "$TEST_TRANSPORT",
00:08:13.582     "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:13.582     "adrfam": "ipv4",
00:08:13.582     "trsvcid": "$NVMF_PORT",
00:08:13.582     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:13.582     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:13.582     "hdgst": ${hdgst:-false},
00:08:13.582     "ddgst": ${ddgst:-false}
00:08:13.582   },
00:08:13.582   "method": "bdev_nvme_attach_controller"
00:08:13.582 }
00:08:13.582 EOF
00:08:13.582 )")
12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
jq . 00:08:13.582 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:13.582 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.582 "params": { 00:08:13.582 "name": "Nvme0", 00:08:13.582 "trtype": "tcp", 00:08:13.582 "traddr": "10.0.0.2", 00:08:13.582 "adrfam": "ipv4", 00:08:13.582 "trsvcid": "4420", 00:08:13.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:13.582 "hdgst": false, 00:08:13.582 "ddgst": false 00:08:13.582 }, 00:08:13.582 "method": "bdev_nvme_attach_controller" 00:08:13.582 }' 00:08:13.582 [2024-12-10 12:11:20.343694] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:13.582 [2024-12-10 12:11:20.343779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491541 ] 00:08:13.840 [2024-12-10 12:11:20.458386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.840 [2024-12-10 12:11:20.572624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.406 Running I/O for 1 seconds... 00:08:15.598 1728.00 IOPS, 108.00 MiB/s 00:08:15.598 Latency(us) 00:08:15.598 [2024-12-10T11:11:22.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.598 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:15.598 Verification LBA range: start 0x0 length 0x400 00:08:15.598 Nvme0n1 : 1.01 1768.80 110.55 0.00 0.00 35597.84 6179.11 30708.30 00:08:15.598 [2024-12-10T11:11:22.424Z] =================================================================================================================== 00:08:15.598 [2024-12-10T11:11:22.424Z] Total : 1768.80 110.55 0.00 0.00 35597.84 6179.11 30708.30 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.532 rmmod nvme_tcp 00:08:16.532 rmmod nvme_fabrics 00:08:16.532 rmmod nvme_keyring 00:08:16.532 12:11:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3490826 ']' 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3490826 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3490826 ']' 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3490826 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3490826 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3490826' 00:08:16.532 killing process with pid 3490826 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3490826 00:08:16.532 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3490826 00:08:17.907 [2024-12-10 12:11:24.486116] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.907 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.808 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:08:19.808 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:19.808 00:08:19.808 real 0m15.424s 00:08:19.808 user 0m34.448s 00:08:19.808 sys 0m5.467s 00:08:19.808 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.808 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.808 ************************************ 00:08:19.808 END TEST nvmf_host_management 00:08:19.808 ************************************ 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.068 ************************************ 00:08:20.068 START TEST nvmf_lvol 00:08:20.068 ************************************ 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:20.068 * Looking for test storage... 00:08:20.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.068 --rc genhtml_branch_coverage=1 00:08:20.068 --rc genhtml_function_coverage=1 00:08:20.068 --rc genhtml_legend=1 00:08:20.068 --rc geninfo_all_blocks=1 00:08:20.068 --rc geninfo_unexecuted_blocks=1 00:08:20.068 00:08:20.068 ' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.068 --rc genhtml_branch_coverage=1 00:08:20.068 --rc genhtml_function_coverage=1 00:08:20.068 --rc genhtml_legend=1 00:08:20.068 --rc geninfo_all_blocks=1 00:08:20.068 --rc geninfo_unexecuted_blocks=1 00:08:20.068 00:08:20.068 ' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.068 --rc genhtml_branch_coverage=1 00:08:20.068 --rc genhtml_function_coverage=1 00:08:20.068 --rc genhtml_legend=1 00:08:20.068 --rc geninfo_all_blocks=1 00:08:20.068 --rc geninfo_unexecuted_blocks=1 00:08:20.068 00:08:20.068 ' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.068 --rc genhtml_branch_coverage=1 00:08:20.068 --rc genhtml_function_coverage=1 00:08:20.068 --rc genhtml_legend=1 00:08:20.068 --rc geninfo_all_blocks=1 00:08:20.068 --rc geninfo_unexecuted_blocks=1 00:08:20.068 00:08:20.068 ' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.068 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.069 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.069 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.069 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.069 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.069 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.069 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.069 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.327 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:25.596 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:25.596 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.596 12:11:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:25.596 Found net devices under 0000:af:00.0: cvl_0_0 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:25.596 Found net devices under 0000:af:00.1: cvl_0_1 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.596 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:08:25.596 00:08:25.596 --- 10.0.0.2 ping statistics --- 00:08:25.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.597 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:08:25.597 00:08:25.597 --- 10.0.0.1 ping statistics --- 00:08:25.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.597 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3495692 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3495692 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3495692 ']' 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.597 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.855 [2024-12-10 12:11:32.453434] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
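The nvmftestinit trace above builds its two-endpoint TCP topology on a single host by moving the target-side port into a network namespace. Condensed into a sketch, using the interface names and 10.0.0.0/24 addresses from this run (both are machine-specific):

  # Target NIC lives in its own namespace; the initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so cleanup can find it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
  # nvmfappstart then launches the target inside the namespace (-m 0x7 = 3 cores).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &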
00:08:25.855 [2024-12-10 12:11:32.453519] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.855 [2024-12-10 12:11:32.570482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.113 [2024-12-10 12:11:32.684075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.113 [2024-12-10 12:11:32.684120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.113 [2024-12-10 12:11:32.684131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.113 [2024-12-10 12:11:32.684142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.113 [2024-12-10 12:11:32.684150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.113 [2024-12-10 12:11:32.690199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.113 [2024-12-10 12:11:32.690221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.113 [2024-12-10 12:11:32.690215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.678 [2024-12-10 12:11:33.473313] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.678 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.244 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:27.244 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.244 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:27.244 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:27.502 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:27.760 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6076041e-c5cb-475a-ab7a-d0648f6efde0 00:08:27.760 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6076041e-c5cb-475a-ab7a-d0648f6efde0 lvol 20 00:08:28.018 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5a56d4e2-3d48-42fe-999e-0056a74e91c6 00:08:28.018 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.275 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a56d4e2-3d48-42fe-999e-0056a74e91c6 00:08:28.275 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.532 [2024-12-10 12:11:35.203674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.532 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.790 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3496181 00:08:28.790 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:28.790 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:29.724 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5a56d4e2-3d48-42fe-999e-0056a74e91c6 MY_SNAPSHOT 00:08:29.982 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cec25ff2-f762-4d7c-a784-fff350baf293 00:08:29.982 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5a56d4e2-3d48-42fe-999e-0056a74e91c6 30 00:08:30.238 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cec25ff2-f762-4d7c-a784-fff350baf293 MY_CLONE 00:08:30.496 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=29a378a7-637f-4a45-9945-c6d762870aad 00:08:30.496 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 29a378a7-637f-4a45-9945-c6d762870aad 00:08:31.062 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3496181 00:08:39.168 Initializing NVMe Controllers 00:08:39.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:39.168 Controller IO queue size 128, less than required. 00:08:39.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
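The lvol data path just traced condenses to the rpc.py sequence below (a sketch: rpc.py abbreviates the full scripts/rpc.py path used above, sizes are MiB, and the captured variables mirror the $lvs/$lvol/$snapshot/$clone assignments in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                    # Malloc0
  rpc.py bdev_malloc_create 64 512                    # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # 6076041e-... in this run
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # 5a56d4e2-... in this run
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # With randwrite load running over the fabric, mutate the lvol live:
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # cec25ff2-... here
  rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)        # 29a378a7-... here
  rpc.py bdev_lvol_inflate "$clone"
  wait                                                # let perf finish its 10 s run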
00:08:39.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:39.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:39.168 Initialization complete. Launching workers. 00:08:39.168 ======================================================== 00:08:39.168 Latency(us) 00:08:39.168 Device Information : IOPS MiB/s Average min max 00:08:39.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11295.50 44.12 11336.77 236.44 129648.45 00:08:39.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10970.40 42.85 11664.55 4193.80 144853.76 00:08:39.168 ======================================================== 00:08:39.168 Total : 22265.90 86.98 11498.27 236.44 144853.76 00:08:39.168 00:08:39.168 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.426 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5a56d4e2-3d48-42fe-999e-0056a74e91c6 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6076041e-c5cb-475a-ab7a-d0648f6efde0 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.684 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.684 rmmod nvme_tcp 00:08:39.684 rmmod nvme_fabrics 00:08:39.942 rmmod nvme_keyring 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3495692 ']' 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3495692 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3495692 ']' 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3495692 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3495692 00:08:39.942 12:11:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3495692' 00:08:39.942 killing process with pid 3495692 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3495692 00:08:39.942 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3495692 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.315 12:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:43.849 00:08:43.849 real 0m23.487s 00:08:43.849 user 1m8.491s 00:08:43.849 sys 0m7.197s 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.849 ************************************ 00:08:43.849 END TEST nvmf_lvol 00:08:43.849 ************************************ 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.849 ************************************ 00:08:43.849 START TEST nvmf_lvs_grow 00:08:43.849 ************************************ 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:43.849 * Looking for test storage... 
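Before nvmf_lvs_grow proceeds, the lvol teardown traced above is worth condensing: it runs in strict reverse order of creation. A sketch, with two hedges: the kill/wait pair paraphrases killprocess, and the netns removal is the assumed effect of the remove_spdk_ns helper, which the trace does not expand:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # quiesce host I/O first
  rpc.py bdev_lvol_delete 5a56d4e2-3d48-42fe-999e-0056a74e91c6
  rpc.py bdev_lvol_delete_lvstore -u 6076041e-c5cb-475a-ab7a-d0648f6efde0
  kill 3495692 && wait 3495692        # killprocess of the nvmf_tgt pid above
  modprobe -v -r nvme-tcp             # drags out nvme_fabrics/nvme_keyring (rmmod lines above)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk     # assumed behavior of remove_spdk_ns
  ip -4 addr flush cvl_0_1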
00:08:43.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.849 --rc genhtml_branch_coverage=1 00:08:43.849 --rc genhtml_function_coverage=1 00:08:43.849 --rc genhtml_legend=1 00:08:43.849 --rc geninfo_all_blocks=1 00:08:43.849 --rc geninfo_unexecuted_blocks=1 00:08:43.849 00:08:43.849 ' 00:08:43.849 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.850 --rc genhtml_branch_coverage=1 00:08:43.850 --rc genhtml_function_coverage=1 00:08:43.850 --rc genhtml_legend=1 00:08:43.850 --rc geninfo_all_blocks=1 00:08:43.850 --rc geninfo_unexecuted_blocks=1 00:08:43.850 00:08:43.850 ' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.850 --rc genhtml_branch_coverage=1 00:08:43.850 --rc genhtml_function_coverage=1 00:08:43.850 --rc genhtml_legend=1 00:08:43.850 --rc geninfo_all_blocks=1 00:08:43.850 --rc geninfo_unexecuted_blocks=1 00:08:43.850 00:08:43.850 ' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.850 --rc genhtml_branch_coverage=1 00:08:43.850 --rc genhtml_function_coverage=1 00:08:43.850 --rc genhtml_legend=1 00:08:43.850 --rc geninfo_all_blocks=1 00:08:43.850 --rc geninfo_unexecuted_blocks=1 00:08:43.850 00:08:43.850 ' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:43.850 12:11:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:43.850 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:49.117 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:49.117 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.117 12:11:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:49.117 Found net devices under 0000:af:00.0: cvl_0_0 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:49.117 Found net devices under 0000:af:00.1: cvl_0_1 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.117 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:08:49.118 00:08:49.118 --- 10.0.0.2 ping statistics --- 00:08:49.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.118 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:49.118 00:08:49.118 --- 10.0.0.1 ping statistics --- 00:08:49.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.118 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.118 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3501673 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3501673 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3501673 ']' 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.376 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 [2024-12-10 12:11:56.018773] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:49.376 [2024-12-10 12:11:56.018863] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.376 [2024-12-10 12:11:56.134622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.634 [2024-12-10 12:11:56.246530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.634 [2024-12-10 12:11:56.246570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.634 [2024-12-10 12:11:56.246581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.634 [2024-12-10 12:11:56.246592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.634 [2024-12-10 12:11:56.246601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.634 [2024-12-10 12:11:56.248043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.199 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.199 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:50.199 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.199 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.199 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.199 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.199 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.457 [2024-12-10 12:11:57.026847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.457 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:50.457 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.457 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.457 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.457 ************************************ 00:08:50.457 START TEST lvs_grow_clean 00:08:50.457 ************************************ 00:08:50.457 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:50.457 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:50.458 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:50.458 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:50.458 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:50.458 12:11:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:50.458 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:50.458 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.458 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.458 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:50.716 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:50.716 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:50.716 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=12206a6f-87e0-4f8d-b688-e636023c7b24 00:08:50.716 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:08:50.716 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.973 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:50.973 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:50.973 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12206a6f-87e0-4f8d-b688-e636023c7b24 lvol 150 00:08:51.232 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=baaddfa8-b3bc-4c09-acfe-91c498199e17 00:08:51.232 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.232 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:51.232 [2024-12-10 12:11:58.025981] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:51.232 [2024-12-10 12:11:58.026074] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:51.232 true 00:08:51.232 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
12206a6f-87e0-4f8d-b688-e636023c7b24 00:08:51.232 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:51.490 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:51.490 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.748 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 baaddfa8-b3bc-4c09-acfe-91c498199e17 00:08:52.006 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:52.006 [2024-12-10 12:11:58.744315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.006 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3502173 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3502173 /var/tmp/bdevperf.sock 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3502173 ']' 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.264 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:52.264 [2024-12-10 12:11:59.001300] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:52.264 [2024-12-10 12:11:59.001402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502173 ] 00:08:52.522 [2024-12-10 12:11:59.114040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.522 [2024-12-10 12:11:59.225768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.088 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.088 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:53.088 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:53.345 Nvme0n1 00:08:53.345 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:53.604 [ 00:08:53.604 { 00:08:53.604 "name": "Nvme0n1", 00:08:53.604 "aliases": [ 00:08:53.604 "baaddfa8-b3bc-4c09-acfe-91c498199e17" 00:08:53.604 ], 00:08:53.604 "product_name": "NVMe disk", 00:08:53.604 "block_size": 4096, 00:08:53.604 "num_blocks": 38912, 00:08:53.604 "uuid": "baaddfa8-b3bc-4c09-acfe-91c498199e17", 00:08:53.604 "numa_id": 1, 00:08:53.604 "assigned_rate_limits": { 00:08:53.604 "rw_ios_per_sec": 0, 00:08:53.604 "rw_mbytes_per_sec": 0, 00:08:53.604 "r_mbytes_per_sec": 0, 00:08:53.604 "w_mbytes_per_sec": 0 00:08:53.604 }, 00:08:53.604 "claimed": false, 00:08:53.604 "zoned": false, 00:08:53.604 "supported_io_types": { 00:08:53.604 "read": true, 00:08:53.604 "write": true, 00:08:53.604 "unmap": true, 00:08:53.605 "flush": true, 00:08:53.605 "reset": true, 00:08:53.605 "nvme_admin": true, 00:08:53.605 "nvme_io": true, 00:08:53.605 "nvme_io_md": false, 00:08:53.605 "write_zeroes": true, 00:08:53.605 "zcopy": false, 00:08:53.605 "get_zone_info": false, 00:08:53.605 "zone_management": false, 00:08:53.605 "zone_append": false, 00:08:53.605 "compare": true, 00:08:53.605 "compare_and_write": true, 00:08:53.605 "abort": true, 00:08:53.605 "seek_hole": false, 00:08:53.605 "seek_data": false, 00:08:53.605 "copy": true, 00:08:53.605 "nvme_iov_md": false 00:08:53.605 }, 00:08:53.605 "memory_domains": [ 00:08:53.605 { 00:08:53.605 "dma_device_id": "system", 00:08:53.605 "dma_device_type": 1 00:08:53.605 } 00:08:53.605 ], 00:08:53.605 "driver_specific": { 00:08:53.605 "nvme": [ 00:08:53.605 { 00:08:53.605 "trid": { 00:08:53.605 "trtype": "TCP", 00:08:53.605 "adrfam": "IPv4", 00:08:53.605 "traddr": "10.0.0.2", 00:08:53.605 "trsvcid": "4420", 00:08:53.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:53.605 }, 00:08:53.605 "ctrlr_data": { 00:08:53.605 "cntlid": 1, 00:08:53.605 "vendor_id": "0x8086", 00:08:53.605 "model_number": "SPDK bdev Controller", 00:08:53.605 "serial_number": "SPDK0", 00:08:53.605 "firmware_revision": "25.01", 00:08:53.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:53.605 "oacs": { 00:08:53.605 "security": 0, 00:08:53.605 "format": 0, 00:08:53.605 "firmware": 0, 00:08:53.605 "ns_manage": 0 00:08:53.605 }, 00:08:53.605 "multi_ctrlr": true, 00:08:53.605 
"ana_reporting": false 00:08:53.605 }, 00:08:53.605 "vs": { 00:08:53.605 "nvme_version": "1.3" 00:08:53.605 }, 00:08:53.605 "ns_data": { 00:08:53.605 "id": 1, 00:08:53.605 "can_share": true 00:08:53.605 } 00:08:53.605 } 00:08:53.605 ], 00:08:53.605 "mp_policy": "active_passive" 00:08:53.605 } 00:08:53.605 } 00:08:53.605 ] 00:08:53.605 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3502435 00:08:53.605 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:53.605 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:53.605 Running I/O for 10 seconds... 00:08:54.980 Latency(us) 00:08:54.980 [2024-12-10T11:12:01.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.980 Nvme0n1 : 1.00 20163.00 78.76 0.00 0.00 0.00 0.00 0.00 00:08:54.980 [2024-12-10T11:12:01.806Z] =================================================================================================================== 00:08:54.980 [2024-12-10T11:12:01.806Z] Total : 20163.00 78.76 0.00 0.00 0.00 0.00 0.00 00:08:54.980 00:08:55.546 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:08:55.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.804 Nvme0n1 : 2.00 20371.00 79.57 0.00 0.00 0.00 0.00 0.00 00:08:55.804 [2024-12-10T11:12:02.630Z] =================================================================================================================== 00:08:55.804 [2024-12-10T11:12:02.630Z] Total : 20371.00 79.57 0.00 0.00 0.00 0.00 0.00 00:08:55.804 00:08:55.804 true 00:08:55.805 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:08:55.805 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:56.062 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:56.062 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:56.062 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3502435 00:08:56.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.628 Nvme0n1 : 3.00 20418.00 79.76 0.00 0.00 0.00 0.00 0.00 00:08:56.628 [2024-12-10T11:12:03.454Z] =================================================================================================================== 00:08:56.628 [2024-12-10T11:12:03.454Z] Total : 20418.00 79.76 0.00 0.00 0.00 0.00 0.00 00:08:56.628 00:08:58.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.005 Nvme0n1 : 4.00 20520.50 80.16 0.00 0.00 0.00 0.00 0.00 00:08:58.005 [2024-12-10T11:12:04.831Z] 
=================================================================================================================== 00:08:58.005 [2024-12-10T11:12:04.831Z] Total : 20520.50 80.16 0.00 0.00 0.00 0.00 0.00 00:08:58.005 00:08:58.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.939 Nvme0n1 : 5.00 20571.00 80.36 0.00 0.00 0.00 0.00 0.00 00:08:58.939 [2024-12-10T11:12:05.766Z] =================================================================================================================== 00:08:58.940 [2024-12-10T11:12:05.766Z] Total : 20571.00 80.36 0.00 0.00 0.00 0.00 0.00 00:08:58.940 00:08:59.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.875 Nvme0n1 : 6.00 20530.17 80.20 0.00 0.00 0.00 0.00 0.00 00:08:59.875 [2024-12-10T11:12:06.701Z] =================================================================================================================== 00:08:59.875 [2024-12-10T11:12:06.701Z] Total : 20530.17 80.20 0.00 0.00 0.00 0.00 0.00 00:08:59.875 00:09:00.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.811 Nvme0n1 : 7.00 20530.86 80.20 0.00 0.00 0.00 0.00 0.00 00:09:00.811 [2024-12-10T11:12:07.637Z] =================================================================================================================== 00:09:00.811 [2024-12-10T11:12:07.637Z] Total : 20530.86 80.20 0.00 0.00 0.00 0.00 0.00 00:09:00.811 00:09:01.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.747 Nvme0n1 : 8.00 20556.25 80.30 0.00 0.00 0.00 0.00 0.00 00:09:01.747 [2024-12-10T11:12:08.573Z] =================================================================================================================== 00:09:01.747 [2024-12-10T11:12:08.573Z] Total : 20556.25 80.30 0.00 0.00 0.00 0.00 0.00 00:09:01.747 00:09:02.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.682 Nvme0n1 : 9.00 20594.11 80.45 0.00 0.00 0.00 0.00 0.00 00:09:02.682 [2024-12-10T11:12:09.508Z] =================================================================================================================== 00:09:02.682 [2024-12-10T11:12:09.508Z] Total : 20594.11 80.45 0.00 0.00 0.00 0.00 0.00 00:09:02.682 00:09:03.761 00:09:03.761 Latency(us) 00:09:03.761 [2024-12-10T11:12:10.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.761 Nvme0n1 : 10.00 20582.65 80.40 0.00 0.00 6215.58 2699.46 16852.11 00:09:03.761 [2024-12-10T11:12:10.587Z] =================================================================================================================== 00:09:03.761 [2024-12-10T11:12:10.587Z] Total : 20582.65 80.40 0.00 0.00 6215.58 2699.46 16852.11 00:09:03.761 { 00:09:03.761 "results": [ 00:09:03.761 { 00:09:03.761 "job": "Nvme0n1", 00:09:03.761 "core_mask": "0x2", 00:09:03.761 "workload": "randwrite", 00:09:03.761 "status": "finished", 00:09:03.761 "queue_depth": 128, 00:09:03.761 "io_size": 4096, 00:09:03.761 "runtime": 10.002649, 00:09:03.761 "iops": 20582.647656635756, 00:09:03.761 "mibps": 80.40096740873342, 00:09:03.761 "io_failed": 0, 00:09:03.761 "io_timeout": 0, 00:09:03.761 "avg_latency_us": 6215.5776379189, 00:09:03.761 "min_latency_us": 2699.4590476190474, 00:09:03.761 "max_latency_us": 16852.114285714284 00:09:03.761 } 00:09:03.761 ], 00:09:03.761 "core_count": 1 00:09:03.761 } 00:09:03.761 12:12:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3502173 00:09:03.761 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3502173 ']' 00:09:03.761 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3502173 00:09:03.761 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:03.761 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.761 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502173 00:09:03.762 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:03.762 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:03.762 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502173' 00:09:03.762 killing process with pid 3502173 00:09:03.762 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3502173 00:09:03.762 Received shutdown signal, test time was about 10.000000 seconds 00:09:03.762 00:09:03.762 Latency(us) 00:09:03.762 [2024-12-10T11:12:10.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.762 [2024-12-10T11:12:10.588Z] =================================================================================================================== 00:09:03.762 [2024-12-10T11:12:10.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:03.762 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3502173 00:09:04.699 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.958 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.958 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:09:04.958 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:05.216 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:05.216 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:05.217 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.475 [2024-12-10 12:12:12.134007] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:05.475 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:09:05.733 request: 00:09:05.733 { 00:09:05.733 "uuid": "12206a6f-87e0-4f8d-b688-e636023c7b24", 00:09:05.733 "method": "bdev_lvol_get_lvstores", 00:09:05.733 "req_id": 1 00:09:05.733 } 00:09:05.733 Got JSON-RPC error response 00:09:05.733 response: 00:09:05.733 { 00:09:05.733 "code": -19, 00:09:05.733 "message": "No such device" 00:09:05.733 } 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:05.733 aio_bdev 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev baaddfa8-b3bc-4c09-acfe-91c498199e17 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@903 -- # local bdev_name=baaddfa8-b3bc-4c09-acfe-91c498199e17 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.733 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.991 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b baaddfa8-b3bc-4c09-acfe-91c498199e17 -t 2000 00:09:06.250 [ 00:09:06.250 { 00:09:06.250 "name": "baaddfa8-b3bc-4c09-acfe-91c498199e17", 00:09:06.250 "aliases": [ 00:09:06.250 "lvs/lvol" 00:09:06.250 ], 00:09:06.250 "product_name": "Logical Volume", 00:09:06.250 "block_size": 4096, 00:09:06.250 "num_blocks": 38912, 00:09:06.250 "uuid": "baaddfa8-b3bc-4c09-acfe-91c498199e17", 00:09:06.250 "assigned_rate_limits": { 00:09:06.250 "rw_ios_per_sec": 0, 00:09:06.250 "rw_mbytes_per_sec": 0, 00:09:06.250 "r_mbytes_per_sec": 0, 00:09:06.250 "w_mbytes_per_sec": 0 00:09:06.250 }, 00:09:06.250 "claimed": false, 00:09:06.250 "zoned": false, 00:09:06.250 "supported_io_types": { 00:09:06.250 "read": true, 00:09:06.250 "write": true, 00:09:06.250 "unmap": true, 00:09:06.250 "flush": false, 00:09:06.250 "reset": true, 00:09:06.250 "nvme_admin": false, 00:09:06.250 "nvme_io": false, 00:09:06.250 "nvme_io_md": false, 00:09:06.250 "write_zeroes": true, 00:09:06.250 "zcopy": false, 00:09:06.250 "get_zone_info": false, 00:09:06.250 "zone_management": false, 00:09:06.250 "zone_append": false, 00:09:06.250 "compare": false, 00:09:06.250 "compare_and_write": false, 00:09:06.250 "abort": false, 00:09:06.250 "seek_hole": true, 00:09:06.250 "seek_data": true, 00:09:06.250 "copy": false, 00:09:06.250 "nvme_iov_md": false 00:09:06.250 }, 00:09:06.250 "driver_specific": { 00:09:06.250 "lvol": { 00:09:06.250 "lvol_store_uuid": "12206a6f-87e0-4f8d-b688-e636023c7b24", 00:09:06.250 "base_bdev": "aio_bdev", 00:09:06.250 "thin_provision": false, 00:09:06.250 "num_allocated_clusters": 38, 00:09:06.250 "snapshot": false, 00:09:06.250 "clone": false, 00:09:06.250 "esnap_clone": false 00:09:06.250 } 00:09:06.250 } 00:09:06.250 } 00:09:06.250 ] 00:09:06.250 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:06.250 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:09:06.250 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:06.508 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:06.508 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:09:06.508 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:06.508 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:06.508 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete baaddfa8-b3bc-4c09-acfe-91c498199e17 00:09:06.766 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12206a6f-87e0-4f8d-b688-e636023c7b24 00:09:07.024 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:07.024 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:07.024 00:09:07.024 real 0m16.761s 00:09:07.024 user 0m16.441s 00:09:07.024 sys 0m1.525s 00:09:07.024 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.024 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:07.024 ************************************ 00:09:07.024 END TEST lvs_grow_clean 00:09:07.024 ************************************ 00:09:07.281 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:07.281 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:07.281 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.281 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.281 ************************************ 00:09:07.281 START TEST lvs_grow_dirty 00:09:07.281 ************************************ 00:09:07.281 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:07.282 12:12:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:07.282 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.539 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:07.539 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:07.539 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:07.539 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:07.539 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:07.797 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:07.797 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:07.797 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 lvol 150 00:09:08.055 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=537d539c-2ee5-4a39-ad82-e95553d70849 00:09:08.055 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.055 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:08.055 [2024-12-10 12:12:14.878971] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:08.055 [2024-12-10 12:12:14.879047] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:08.316 true 00:09:08.316 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:08.316 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:08.316 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:08.316 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:08.576 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 537d539c-2ee5-4a39-ad82-e95553d70849 00:09:08.835 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:08.835 [2024-12-10 12:12:15.617341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.835 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3505660 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3505660 /var/tmp/bdevperf.sock 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3505660 ']' 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:09.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.093 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.093 [2024-12-10 12:12:15.887874] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
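[editor's note] The RPCs traced above export the freshly created lvol over NVMe-oF/TCP and launch bdevperf against it. A minimal standalone sketch of that sequence, assuming rpc.py and bdevperf are on PATH and an SPDK target is already running (absolute workspace paths from the run shortened; the transport-creation step does not appear in this excerpt and is assumed to have been done earlier by the harness):

# Export an existing lvol bdev over NVMe-oF TCP -- values mirror the run above.
rpc.py nvmf_create_transport -t tcp                                  # assumed done earlier by nvmftestinit
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0  # allow any host, serial SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 537d539c-2ee5-4a39-ad82-e95553d70849
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# 10 s of 4 KiB random writes at queue depth 128; -m 0x2 pins the reactor to core 1,
# -z makes bdevperf wait for the attach RPC that the harness issues next.
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0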
00:09:09.093 [2024-12-10 12:12:15.887978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505660 ] 00:09:09.351 [2024-12-10 12:12:15.999160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.351 [2024-12-10 12:12:16.112089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.917 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.917 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:09.917 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:10.483 Nvme0n1 00:09:10.483 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:10.741 [ 00:09:10.741 { 00:09:10.741 "name": "Nvme0n1", 00:09:10.741 "aliases": [ 00:09:10.741 "537d539c-2ee5-4a39-ad82-e95553d70849" 00:09:10.741 ], 00:09:10.741 "product_name": "NVMe disk", 00:09:10.741 "block_size": 4096, 00:09:10.741 "num_blocks": 38912, 00:09:10.741 "uuid": "537d539c-2ee5-4a39-ad82-e95553d70849", 00:09:10.741 "numa_id": 1, 00:09:10.741 "assigned_rate_limits": { 00:09:10.741 "rw_ios_per_sec": 0, 00:09:10.741 "rw_mbytes_per_sec": 0, 00:09:10.741 "r_mbytes_per_sec": 0, 00:09:10.741 "w_mbytes_per_sec": 0 00:09:10.741 }, 00:09:10.741 "claimed": false, 00:09:10.741 "zoned": false, 00:09:10.741 "supported_io_types": { 00:09:10.741 "read": true, 00:09:10.741 "write": true, 00:09:10.741 "unmap": true, 00:09:10.741 "flush": true, 00:09:10.741 "reset": true, 00:09:10.741 "nvme_admin": true, 00:09:10.741 "nvme_io": true, 00:09:10.741 "nvme_io_md": false, 00:09:10.741 "write_zeroes": true, 00:09:10.741 "zcopy": false, 00:09:10.741 "get_zone_info": false, 00:09:10.741 "zone_management": false, 00:09:10.741 "zone_append": false, 00:09:10.741 "compare": true, 00:09:10.741 "compare_and_write": true, 00:09:10.741 "abort": true, 00:09:10.741 "seek_hole": false, 00:09:10.741 "seek_data": false, 00:09:10.741 "copy": true, 00:09:10.741 "nvme_iov_md": false 00:09:10.741 }, 00:09:10.741 "memory_domains": [ 00:09:10.741 { 00:09:10.741 "dma_device_id": "system", 00:09:10.741 "dma_device_type": 1 00:09:10.741 } 00:09:10.741 ], 00:09:10.741 "driver_specific": { 00:09:10.741 "nvme": [ 00:09:10.741 { 00:09:10.741 "trid": { 00:09:10.741 "trtype": "TCP", 00:09:10.741 "adrfam": "IPv4", 00:09:10.741 "traddr": "10.0.0.2", 00:09:10.741 "trsvcid": "4420", 00:09:10.741 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:10.741 }, 00:09:10.741 "ctrlr_data": { 00:09:10.741 "cntlid": 1, 00:09:10.741 "vendor_id": "0x8086", 00:09:10.741 "model_number": "SPDK bdev Controller", 00:09:10.741 "serial_number": "SPDK0", 00:09:10.741 "firmware_revision": "25.01", 00:09:10.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:10.741 "oacs": { 00:09:10.741 "security": 0, 00:09:10.741 "format": 0, 00:09:10.741 "firmware": 0, 00:09:10.741 "ns_manage": 0 00:09:10.741 }, 00:09:10.741 "multi_ctrlr": true, 00:09:10.741 
"ana_reporting": false 00:09:10.741 }, 00:09:10.741 "vs": { 00:09:10.741 "nvme_version": "1.3" 00:09:10.741 }, 00:09:10.741 "ns_data": { 00:09:10.741 "id": 1, 00:09:10.741 "can_share": true 00:09:10.741 } 00:09:10.741 } 00:09:10.741 ], 00:09:10.741 "mp_policy": "active_passive" 00:09:10.741 } 00:09:10.741 } 00:09:10.741 ] 00:09:10.741 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:10.741 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3505891 00:09:10.741 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:10.741 Running I/O for 10 seconds... 00:09:11.676 Latency(us) 00:09:11.676 [2024-12-10T11:12:18.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.676 Nvme0n1 : 1.00 20480.00 80.00 0.00 0.00 0.00 0.00 0.00 00:09:11.676 [2024-12-10T11:12:18.502Z] =================================================================================================================== 00:09:11.676 [2024-12-10T11:12:18.502Z] Total : 20480.00 80.00 0.00 0.00 0.00 0.00 0.00 00:09:11.676 00:09:12.610 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:12.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.868 Nvme0n1 : 2.00 20585.00 80.41 0.00 0.00 0.00 0.00 0.00 00:09:12.868 [2024-12-10T11:12:19.694Z] =================================================================================================================== 00:09:12.868 [2024-12-10T11:12:19.694Z] Total : 20585.00 80.41 0.00 0.00 0.00 0.00 0.00 00:09:12.868 00:09:12.868 true 00:09:12.868 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:12.868 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:13.126 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:13.126 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:13.126 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3505891 00:09:13.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.692 Nvme0n1 : 3.00 20475.67 79.98 0.00 0.00 0.00 0.00 0.00 00:09:13.692 [2024-12-10T11:12:20.518Z] =================================================================================================================== 00:09:13.692 [2024-12-10T11:12:20.518Z] Total : 20475.67 79.98 0.00 0.00 0.00 0.00 0.00 00:09:13.692 00:09:14.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.625 Nvme0n1 : 4.00 20550.75 80.28 0.00 0.00 0.00 0.00 0.00 00:09:14.625 [2024-12-10T11:12:21.451Z] 
=================================================================================================================== 00:09:14.625 [2024-12-10T11:12:21.451Z] Total : 20550.75 80.28 0.00 0.00 0.00 0.00 0.00 00:09:14.625 00:09:15.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.998 Nvme0n1 : 5.00 20612.80 80.52 0.00 0.00 0.00 0.00 0.00 00:09:15.998 [2024-12-10T11:12:22.824Z] =================================================================================================================== 00:09:15.998 [2024-12-10T11:12:22.824Z] Total : 20612.80 80.52 0.00 0.00 0.00 0.00 0.00 00:09:15.998 00:09:16.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.932 Nvme0n1 : 6.00 20645.00 80.64 0.00 0.00 0.00 0.00 0.00 00:09:16.932 [2024-12-10T11:12:23.758Z] =================================================================================================================== 00:09:16.932 [2024-12-10T11:12:23.758Z] Total : 20645.00 80.64 0.00 0.00 0.00 0.00 0.00 00:09:16.932 00:09:17.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.865 Nvme0n1 : 7.00 20698.57 80.85 0.00 0.00 0.00 0.00 0.00 00:09:17.865 [2024-12-10T11:12:24.691Z] =================================================================================================================== 00:09:17.865 [2024-12-10T11:12:24.691Z] Total : 20698.57 80.85 0.00 0.00 0.00 0.00 0.00 00:09:17.865 00:09:18.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.801 Nvme0n1 : 8.00 20723.25 80.95 0.00 0.00 0.00 0.00 0.00 00:09:18.801 [2024-12-10T11:12:25.627Z] =================================================================================================================== 00:09:18.801 [2024-12-10T11:12:25.627Z] Total : 20723.25 80.95 0.00 0.00 0.00 0.00 0.00 00:09:18.801 00:09:19.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.733 Nvme0n1 : 9.00 20756.33 81.08 0.00 0.00 0.00 0.00 0.00 00:09:19.733 [2024-12-10T11:12:26.559Z] =================================================================================================================== 00:09:19.733 [2024-12-10T11:12:26.559Z] Total : 20756.33 81.08 0.00 0.00 0.00 0.00 0.00 00:09:19.733 00:09:20.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.667 Nvme0n1 : 10.00 20783.30 81.18 0.00 0.00 0.00 0.00 0.00 00:09:20.667 [2024-12-10T11:12:27.493Z] =================================================================================================================== 00:09:20.667 [2024-12-10T11:12:27.493Z] Total : 20783.30 81.18 0.00 0.00 0.00 0.00 0.00 00:09:20.667 00:09:20.667 00:09:20.667 Latency(us) 00:09:20.667 [2024-12-10T11:12:27.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.667 Nvme0n1 : 10.01 20782.61 81.18 0.00 0.00 6155.56 3713.71 12795.12 00:09:20.667 [2024-12-10T11:12:27.493Z] =================================================================================================================== 00:09:20.667 [2024-12-10T11:12:27.493Z] Total : 20782.61 81.18 0.00 0.00 6155.56 3713.71 12795.12 00:09:20.667 { 00:09:20.667 "results": [ 00:09:20.667 { 00:09:20.668 "job": "Nvme0n1", 00:09:20.668 "core_mask": "0x2", 00:09:20.668 "workload": "randwrite", 00:09:20.668 "status": "finished", 00:09:20.668 "queue_depth": 128, 00:09:20.668 "io_size": 4096, 00:09:20.668 
"runtime": 10.006493, 00:09:20.668 "iops": 20782.605854018984, 00:09:20.668 "mibps": 81.18205411726166, 00:09:20.668 "io_failed": 0, 00:09:20.668 "io_timeout": 0, 00:09:20.668 "avg_latency_us": 6155.561533648365, 00:09:20.668 "min_latency_us": 3713.7066666666665, 00:09:20.668 "max_latency_us": 12795.12380952381 00:09:20.668 } 00:09:20.668 ], 00:09:20.668 "core_count": 1 00:09:20.668 } 00:09:20.668 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3505660 00:09:20.668 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3505660 ']' 00:09:20.668 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3505660 00:09:20.668 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:20.926 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.926 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505660 00:09:20.926 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:20.926 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:20.926 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3505660' 00:09:20.926 killing process with pid 3505660 00:09:20.926 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3505660 00:09:20.926 Received shutdown signal, test time was about 10.000000 seconds 00:09:20.926 00:09:20.926 Latency(us) 00:09:20.926 [2024-12-10T11:12:27.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.926 [2024-12-10T11:12:27.752Z] =================================================================================================================== 00:09:20.926 [2024-12-10T11:12:27.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:20.926 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3505660 00:09:21.861 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.861 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.119 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:22.119 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:22.377 12:12:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3501673 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3501673 00:09:22.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3501673 Killed "${NVMF_APP[@]}" "$@" 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3507832 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3507832 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3507832 ']' 00:09:22.377 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.378 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:22.378 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.378 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.378 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.378 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.378 [2024-12-10 12:12:29.141663] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:22.378 [2024-12-10 12:12:29.141753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.636 [2024-12-10 12:12:29.263978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.636 [2024-12-10 12:12:29.360756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.636 [2024-12-10 12:12:29.360801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.636 [2024-12-10 12:12:29.360811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.636 [2024-12-10 12:12:29.360838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
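[editor's note] The kill -9 above takes the target down with the grown lvstore only dirtily persisted; the fresh nvmf_tgt now starting will re-create the AIO bdev, and the blobstore recovery that triggers is what the remaining cluster-count checks verify. A condensed sketch of the grow-then-recover sequence this test exercises, assuming the same backing file and lvstore UUID as above (paths shortened; the harness drives each step through its wrappers):

# Grow the backing file, rescan the AIO bdev, then grow the lvstore on top of it.
truncate -s 400M test/nvmf/target/aio_bdev          # file was created at 200M
rpc.py bdev_aio_rescan aio_bdev                     # block count 51200 -> 102400
rpc.py bdev_lvol_grow_lvstore -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62
# Dirty shutdown, then re-attach: the blobstore replays its on-disk metadata.
kill -9 "$nvmfpid"
rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # logs 'Performing recovery on blobstore'
rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 | jq -r '.[0].total_data_clusters'   # expect 99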
00:09:22.636 [2024-12-10 12:12:29.360846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.636 [2024-12-10 12:12:29.362298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.202 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.202 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:23.202 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.202 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.202 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.202 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.202 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.469 [2024-12-10 12:12:30.156337] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:23.469 [2024-12-10 12:12:30.156483] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:23.469 [2024-12-10 12:12:30.156518] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:23.469 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:23.469 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 537d539c-2ee5-4a39-ad82-e95553d70849 00:09:23.469 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=537d539c-2ee5-4a39-ad82-e95553d70849 00:09:23.469 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.470 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:23.470 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.470 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.470 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:23.793 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 537d539c-2ee5-4a39-ad82-e95553d70849 -t 2000 00:09:23.793 [ 00:09:23.793 { 00:09:23.793 "name": "537d539c-2ee5-4a39-ad82-e95553d70849", 00:09:23.793 "aliases": [ 00:09:23.793 "lvs/lvol" 00:09:23.793 ], 00:09:23.793 "product_name": "Logical Volume", 00:09:23.793 "block_size": 4096, 00:09:23.793 "num_blocks": 38912, 00:09:23.793 "uuid": "537d539c-2ee5-4a39-ad82-e95553d70849", 00:09:23.793 "assigned_rate_limits": { 00:09:23.793 "rw_ios_per_sec": 0, 00:09:23.793 "rw_mbytes_per_sec": 0, 
00:09:23.793 "r_mbytes_per_sec": 0, 00:09:23.793 "w_mbytes_per_sec": 0 00:09:23.793 }, 00:09:23.793 "claimed": false, 00:09:23.793 "zoned": false, 00:09:23.793 "supported_io_types": { 00:09:23.793 "read": true, 00:09:23.793 "write": true, 00:09:23.793 "unmap": true, 00:09:23.793 "flush": false, 00:09:23.793 "reset": true, 00:09:23.793 "nvme_admin": false, 00:09:23.793 "nvme_io": false, 00:09:23.793 "nvme_io_md": false, 00:09:23.793 "write_zeroes": true, 00:09:23.793 "zcopy": false, 00:09:23.793 "get_zone_info": false, 00:09:23.793 "zone_management": false, 00:09:23.793 "zone_append": false, 00:09:23.793 "compare": false, 00:09:23.793 "compare_and_write": false, 00:09:23.793 "abort": false, 00:09:23.793 "seek_hole": true, 00:09:23.793 "seek_data": true, 00:09:23.793 "copy": false, 00:09:23.793 "nvme_iov_md": false 00:09:23.793 }, 00:09:23.793 "driver_specific": { 00:09:23.793 "lvol": { 00:09:23.793 "lvol_store_uuid": "bea5c451-2ac6-49d1-8825-b6bbfcf4ac62", 00:09:23.793 "base_bdev": "aio_bdev", 00:09:23.793 "thin_provision": false, 00:09:23.793 "num_allocated_clusters": 38, 00:09:23.793 "snapshot": false, 00:09:23.793 "clone": false, 00:09:23.793 "esnap_clone": false 00:09:23.793 } 00:09:23.793 } 00:09:23.793 } 00:09:23.793 ] 00:09:24.052 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:24.052 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:24.052 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:24.052 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:24.052 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:24.052 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:24.310 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:24.310 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:24.310 [2024-12-10 12:12:31.096725] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:24.310 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:24.568 request: 00:09:24.568 { 00:09:24.568 "uuid": "bea5c451-2ac6-49d1-8825-b6bbfcf4ac62", 00:09:24.568 "method": "bdev_lvol_get_lvstores", 00:09:24.568 "req_id": 1 00:09:24.568 } 00:09:24.568 Got JSON-RPC error response 00:09:24.568 response: 00:09:24.568 { 00:09:24.568 "code": -19, 00:09:24.568 "message": "No such device" 00:09:24.568 } 00:09:24.568 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:24.568 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:24.568 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:24.568 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:24.568 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.826 aio_bdev 00:09:24.826 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 537d539c-2ee5-4a39-ad82-e95553d70849 00:09:24.826 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=537d539c-2ee5-4a39-ad82-e95553d70849 00:09:24.826 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.826 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:24.826 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.826 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.826 12:12:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.084 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 537d539c-2ee5-4a39-ad82-e95553d70849 -t 2000 00:09:25.084 [ 00:09:25.084 { 00:09:25.084 "name": "537d539c-2ee5-4a39-ad82-e95553d70849", 00:09:25.084 "aliases": [ 00:09:25.084 "lvs/lvol" 00:09:25.084 ], 00:09:25.084 "product_name": "Logical Volume", 00:09:25.084 "block_size": 4096, 00:09:25.084 "num_blocks": 38912, 00:09:25.084 "uuid": "537d539c-2ee5-4a39-ad82-e95553d70849", 00:09:25.084 "assigned_rate_limits": { 00:09:25.084 "rw_ios_per_sec": 0, 00:09:25.084 "rw_mbytes_per_sec": 0, 00:09:25.084 "r_mbytes_per_sec": 0, 00:09:25.084 "w_mbytes_per_sec": 0 00:09:25.084 }, 00:09:25.084 "claimed": false, 00:09:25.084 "zoned": false, 00:09:25.084 "supported_io_types": { 00:09:25.084 "read": true, 00:09:25.084 "write": true, 00:09:25.084 "unmap": true, 00:09:25.084 "flush": false, 00:09:25.084 "reset": true, 00:09:25.084 "nvme_admin": false, 00:09:25.084 "nvme_io": false, 00:09:25.084 "nvme_io_md": false, 00:09:25.084 "write_zeroes": true, 00:09:25.084 "zcopy": false, 00:09:25.084 "get_zone_info": false, 00:09:25.084 "zone_management": false, 00:09:25.084 "zone_append": false, 00:09:25.084 "compare": false, 00:09:25.084 "compare_and_write": false, 00:09:25.084 "abort": false, 00:09:25.084 "seek_hole": true, 00:09:25.084 "seek_data": true, 00:09:25.084 "copy": false, 00:09:25.084 "nvme_iov_md": false 00:09:25.084 }, 00:09:25.084 "driver_specific": { 00:09:25.084 "lvol": { 00:09:25.084 "lvol_store_uuid": "bea5c451-2ac6-49d1-8825-b6bbfcf4ac62", 00:09:25.084 "base_bdev": "aio_bdev", 00:09:25.084 "thin_provision": false, 00:09:25.084 "num_allocated_clusters": 38, 00:09:25.084 "snapshot": false, 00:09:25.084 "clone": false, 00:09:25.084 "esnap_clone": false 00:09:25.084 } 00:09:25.084 } 00:09:25.084 } 00:09:25.084 ] 00:09:25.084 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:25.084 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:25.084 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:25.342 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:25.342 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:25.342 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:25.600 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:25.600 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 537d539c-2ee5-4a39-ad82-e95553d70849 00:09:25.859 12:12:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bea5c451-2ac6-49d1-8825-b6bbfcf4ac62 00:09:25.859 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.118 00:09:26.118 real 0m18.940s 00:09:26.118 user 0m48.587s 00:09:26.118 sys 0m3.849s 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.118 ************************************ 00:09:26.118 END TEST lvs_grow_dirty 00:09:26.118 ************************************ 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:26.118 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:26.118 nvmf_trace.0 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.376 rmmod nvme_tcp 00:09:26.376 rmmod nvme_fabrics 00:09:26.376 rmmod nvme_keyring 00:09:26.376 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:26.376 
12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3507832 ']' 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3507832 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3507832 ']' 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3507832 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507832 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3507832' 00:09:26.376 killing process with pid 3507832 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3507832 00:09:26.376 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3507832 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.751 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.653 00:09:29.653 real 0m45.968s 00:09:29.653 user 1m11.968s 00:09:29.653 sys 0m10.145s 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:29.653 ************************************ 00:09:29.653 END TEST nvmf_lvs_grow 00:09:29.653 ************************************ 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.653 ************************************ 00:09:29.653 START TEST nvmf_bdev_io_wait 00:09:29.653 ************************************ 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:29.653 * Looking for test storage... 00:09:29.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.653 --rc genhtml_branch_coverage=1 00:09:29.653 --rc genhtml_function_coverage=1 00:09:29.653 --rc genhtml_legend=1 00:09:29.653 --rc geninfo_all_blocks=1 00:09:29.653 --rc geninfo_unexecuted_blocks=1 00:09:29.653 00:09:29.653 ' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.653 --rc genhtml_branch_coverage=1 00:09:29.653 --rc genhtml_function_coverage=1 00:09:29.653 --rc genhtml_legend=1 00:09:29.653 --rc geninfo_all_blocks=1 00:09:29.653 --rc geninfo_unexecuted_blocks=1 00:09:29.653 00:09:29.653 ' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.653 --rc genhtml_branch_coverage=1 00:09:29.653 --rc genhtml_function_coverage=1 00:09:29.653 --rc genhtml_legend=1 00:09:29.653 --rc geninfo_all_blocks=1 00:09:29.653 --rc geninfo_unexecuted_blocks=1 00:09:29.653 00:09:29.653 ' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.653 --rc genhtml_branch_coverage=1 00:09:29.653 --rc genhtml_function_coverage=1 00:09:29.653 --rc genhtml_legend=1 00:09:29.653 --rc geninfo_all_blocks=1 00:09:29.653 --rc geninfo_unexecuted_blocks=1 00:09:29.653 00:09:29.653 ' 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.653 12:12:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.653 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.912 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.913 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:35.214 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:35.214 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.214 12:12:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:35.214 Found net devices under 0000:af:00.0: cvl_0_0 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:35.214 Found net devices under 0000:af:00.1: cvl_0_1 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.214 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:35.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:09:35.215 00:09:35.215 --- 10.0.0.2 ping statistics --- 00:09:35.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.215 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:35.215 00:09:35.215 --- 10.0.0.1 ping statistics --- 00:09:35.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.215 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3512138 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3512138 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3512138 ']' 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.215 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.215 [2024-12-10 12:12:41.764454] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
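The block above is the heart of the phy TCP setup: both test ports (cvl_0_0 and cvl_0_1) sit in the same machine, so nvmf_tcp_init moves the target-side port (cvl_0_0, 10.0.0.2) into a dedicated network namespace and leaves the initiator-side port (cvl_0_1, 10.0.0.1) in the default namespace, which is why the two pings above actually cross the wire between the ports instead of being short-circuited through loopback. A minimal sketch of that pattern, using the interface names, addresses and firewall rule visible in this log (the function name is illustrative):

    # isolate the target port in its own netns so initiator -> target
    # traffic leaves the default network stack and traverses the link
    setup_target_netns() {
        local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
        ip netns add "$ns"
        ip link set "$target_if" netns "$ns"
        ip addr add 10.0.0.1/24 dev "$initiator_if"
        ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
        ip link set "$initiator_if" up
        ip netns exec "$ns" ip link set "$target_if" up
        ip netns exec "$ns" ip link set lo up
        # let the NVMe/TCP listener port through on the initiator side
        iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    }

Every target-side command is then prefixed with NVMF_TARGET_NS_CMD, which is why nvmf_tgt itself is launched as `ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc` just below.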
00:09:35.215 [2024-12-10 12:12:41.764554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.215 [2024-12-10 12:12:41.880277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.215 [2024-12-10 12:12:41.988273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.215 [2024-12-10 12:12:41.988322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.215 [2024-12-10 12:12:41.988332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.215 [2024-12-10 12:12:41.988342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.215 [2024-12-10 12:12:41.988350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.215 [2024-12-10 12:12:41.990836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.215 [2024-12-10 12:12:41.990913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.215 [2024-12-10 12:12:41.990975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.215 [2024-12-10 12:12:41.990984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.783 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.783 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:35.783 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.783 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.783 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:36.042 [2024-12-10 12:12:42.842381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.042 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.302 Malloc0 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.302 [2024-12-10 12:12:42.943358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3512383 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3512385 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3512386 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3512388 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.302 12:12:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.302 { 00:09:36.302 "params": { 00:09:36.302 "name": "Nvme$subsystem", 00:09:36.302 "trtype": "$TEST_TRANSPORT", 00:09:36.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.302 "adrfam": "ipv4", 00:09:36.302 "trsvcid": "$NVMF_PORT", 00:09:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.302 "hdgst": ${hdgst:-false}, 00:09:36.302 "ddgst": ${ddgst:-false} 00:09:36.302 }, 00:09:36.302 "method": "bdev_nvme_attach_controller" 00:09:36.302 } 00:09:36.302 EOF 00:09:36.302 )") 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.302 { 00:09:36.302 "params": { 00:09:36.302 "name": "Nvme$subsystem", 00:09:36.302 "trtype": "$TEST_TRANSPORT", 00:09:36.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.302 "adrfam": "ipv4", 00:09:36.302 "trsvcid": "$NVMF_PORT", 00:09:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.302 "hdgst": ${hdgst:-false}, 00:09:36.302 "ddgst": ${ddgst:-false} 00:09:36.302 }, 00:09:36.302 "method": "bdev_nvme_attach_controller" 00:09:36.302 } 00:09:36.302 EOF 00:09:36.302 )") 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.302 { 
00:09:36.302 "params": { 00:09:36.302 "name": "Nvme$subsystem", 00:09:36.302 "trtype": "$TEST_TRANSPORT", 00:09:36.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.302 "adrfam": "ipv4", 00:09:36.302 "trsvcid": "$NVMF_PORT", 00:09:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.302 "hdgst": ${hdgst:-false}, 00:09:36.302 "ddgst": ${ddgst:-false} 00:09:36.302 }, 00:09:36.302 "method": "bdev_nvme_attach_controller" 00:09:36.302 } 00:09:36.302 EOF 00:09:36.302 )") 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.302 { 00:09:36.302 "params": { 00:09:36.302 "name": "Nvme$subsystem", 00:09:36.302 "trtype": "$TEST_TRANSPORT", 00:09:36.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.302 "adrfam": "ipv4", 00:09:36.302 "trsvcid": "$NVMF_PORT", 00:09:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.302 "hdgst": ${hdgst:-false}, 00:09:36.302 "ddgst": ${ddgst:-false} 00:09:36.302 }, 00:09:36.302 "method": "bdev_nvme_attach_controller" 00:09:36.302 } 00:09:36.302 EOF 00:09:36.302 )") 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3512383 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.302 "params": { 00:09:36.302 "name": "Nvme1", 00:09:36.302 "trtype": "tcp", 00:09:36.302 "traddr": "10.0.0.2", 00:09:36.302 "adrfam": "ipv4", 00:09:36.302 "trsvcid": "4420", 00:09:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.302 "hdgst": false, 00:09:36.302 "ddgst": false 00:09:36.302 }, 00:09:36.302 "method": "bdev_nvme_attach_controller" 00:09:36.302 }' 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.302 "params": { 00:09:36.302 "name": "Nvme1", 00:09:36.302 "trtype": "tcp", 00:09:36.302 "traddr": "10.0.0.2", 00:09:36.302 "adrfam": "ipv4", 00:09:36.302 "trsvcid": "4420", 00:09:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.302 "hdgst": false, 00:09:36.302 "ddgst": false 00:09:36.302 }, 00:09:36.302 "method": "bdev_nvme_attach_controller" 00:09:36.302 }' 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.302 "params": { 00:09:36.302 "name": "Nvme1", 00:09:36.302 "trtype": "tcp", 00:09:36.302 "traddr": "10.0.0.2", 00:09:36.302 "adrfam": "ipv4", 00:09:36.302 "trsvcid": "4420", 00:09:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.302 "hdgst": false, 00:09:36.302 "ddgst": false 00:09:36.302 }, 00:09:36.302 "method": "bdev_nvme_attach_controller" 00:09:36.302 }' 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:36.302 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.302 "params": { 00:09:36.302 "name": "Nvme1", 00:09:36.302 "trtype": "tcp", 00:09:36.302 "traddr": "10.0.0.2", 00:09:36.303 "adrfam": "ipv4", 00:09:36.303 "trsvcid": "4420", 00:09:36.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.303 "hdgst": false, 00:09:36.303 "ddgst": false 00:09:36.303 }, 00:09:36.303 "method": "bdev_nvme_attach_controller" 00:09:36.303 }' 00:09:36.303 [2024-12-10 12:12:43.023912] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:36.303 [2024-12-10 12:12:43.024020] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:36.303 [2024-12-10 12:12:43.025105] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:36.303 [2024-12-10 12:12:43.025199] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:36.303 [2024-12-10 12:12:43.025404] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
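At this point the test has forked four independent bdevperf processes against the same Nvme1n1 bdev, one per I/O type, each on its own core (masks 0x10/0x20/0x40/0x80) and with its own instance id (-i 1..4), which is what produces the distinct --file-prefix=spdk1..spdk4 DPDK instances in the EAL lines that follow. A condensed sketch of that fan-out, with the binary path as in this log and reusing the gen_target_json sketch above:

    # one bdevperf per workload; separate -i values keep DPDK hugepage/shm state apart
    declare -A mask=([write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80)
    i=1
    for w in write read flush unmap; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
            -m "${mask[$w]}" -i "$i" --json <(gen_target_json) \
            -q 128 -o 4096 -w "$w" -t 1 -s 256 &
        eval "${w^^}_PID=$!"   # WRITE_PID, READ_PID, FLUSH_PID, UNMAP_PID as in the trace
        ((i++))
    done
    wait "$WRITE_PID"          # the trace then waits on each pid in turn

Because every instance runs for the same 1 s window (-t 1), the four latency tables that follow are directly comparable.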
00:09:36.303 [2024-12-10 12:12:43.025478] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] [2024-12-10 12:12:43.025480] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:09:36.303 [2024-12-10 12:12:43.025551] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:36.562 [2024-12-10 12:12:43.256494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.562 [2024-12-10 12:12:43.353531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.562 [2024-12-10 12:12:43.366233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:36.821 [2024-12-10 12:12:43.446212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.821 [2024-12-10 12:12:43.453955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:09:36.821 [2024-12-10 12:12:43.513386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.821 [2024-12-10 12:12:43.555785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:36.821 [2024-12-10 12:12:43.623052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:37.079 Running I/O for 1 seconds...
00:09:37.079 Running I/O for 1 seconds...
00:09:37.339 Running I/O for 1 seconds...
00:09:37.598 Running I/O for 1 seconds...
00:09:38.167 12061.00 IOPS, 47.11 MiB/s
00:09:38.167 Latency(us)
00:09:38.167 [2024-12-10T11:12:44.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:38.167 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:38.167 Nvme1n1 : 1.01 12123.48 47.36 0.00 0.00 10523.71 4868.39 15603.81
00:09:38.167 [2024-12-10T11:12:44.993Z] ===================================================================================================================
00:09:38.167 [2024-12-10T11:12:44.993Z] Total : 12123.48 47.36 0.00 0.00 10523.71 4868.39 15603.81
00:09:38.167 9055.00 IOPS, 35.37 MiB/s
00:09:38.167 Latency(us)
00:09:38.167 [2024-12-10T11:12:44.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:38.167 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:38.167 Nvme1n1 : 1.01 9121.98 35.63 0.00 0.00 13977.06 5929.45 23343.30
00:09:38.167 [2024-12-10T11:12:44.993Z] ===================================================================================================================
00:09:38.167 [2024-12-10T11:12:44.993Z] Total : 9121.98 35.63 0.00 0.00 13977.06 5929.45 23343.30
00:09:38.426 214168.00 IOPS, 836.59 MiB/s
00:09:38.426 Latency(us)
00:09:38.426 [2024-12-10T11:12:45.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:38.426 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:38.426 Nvme1n1 : 1.00 213812.31 835.20 0.00 0.00 595.64 267.22 1646.20
00:09:38.426 [2024-12-10T11:12:45.252Z] ===================================================================================================================
00:09:38.426 [2024-12-10T11:12:45.252Z] Total : 213812.31 835.20 0.00 0.00 595.64 267.22 1646.20
00:09:38.427 9375.00 IOPS, 36.62 MiB/s
00:09:38.427 Latency(us)
00:09:38.427 [2024-12-10T11:12:45.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:38.427 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:38.427 Nvme1n1 : 1.01 9439.58 36.87 0.00 0.00 13511.82 5648.58 22344.66
00:09:38.427 [2024-12-10T11:12:45.253Z] ===================================================================================================================
00:09:38.427 [2024-12-10T11:12:45.253Z] Total : 9439.58 36.87 0.00 0.00 13511.82 5648.58 22344.66
00:09:38.995 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3512385
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3512386
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3512388
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:39.254 rmmod nvme_tcp
00:09:39.254 rmmod nvme_fabrics
00:09:39.254 rmmod nvme_keyring
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3512138 ']'
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3512138
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3512138 ']'
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3512138
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:39.254 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3512138
00:09:39.254 12:12:46
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.254 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.254 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3512138' 00:09:39.254 killing process with pid 3512138 00:09:39.254 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3512138 00:09:39.254 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3512138 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.632 12:12:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.538 00:09:42.538 real 0m12.853s 00:09:42.538 user 0m29.347s 00:09:42.538 sys 0m6.134s 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.538 ************************************ 00:09:42.538 END TEST nvmf_bdev_io_wait 00:09:42.538 ************************************ 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.538 ************************************ 00:09:42.538 START TEST nvmf_queue_depth 00:09:42.538 ************************************ 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:42.538 * Looking for test storage... 
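Before the queue-depth test gets going, the four latency tables from nvmf_bdev_io_wait above are worth a sanity check: bdevperf reports IOPS and MiB/s over the 1 s run, and with a 4096-byte I/O size the two columns must satisfy MiB/s = IOPS * 4096 / 2^20. A quick cross-check of the unmap row, using the values from the table above:

    # verify 12123.48 IOPS at 4 KiB per I/O really is ~47.36 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 12123.48 * 4096 / (1024 * 1024) }'
    # -> 47.36 MiB/s

The flush workload's ~214k IOPS with sub-millisecond average latency stands out because flushing a RAM-backed Malloc bdev has no media to flush, so those completions are effectively free, while write, read and unmap go through the full NVMe/TCP round trip at roughly 9-12k IOPS with 10-14 ms average latency at queue depth 128.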
00:09:42.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.538 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.798 --rc genhtml_branch_coverage=1 00:09:42.798 --rc genhtml_function_coverage=1 00:09:42.798 --rc genhtml_legend=1 00:09:42.798 --rc geninfo_all_blocks=1 00:09:42.798 --rc geninfo_unexecuted_blocks=1 00:09:42.798 00:09:42.798 ' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.798 --rc genhtml_branch_coverage=1 00:09:42.798 --rc genhtml_function_coverage=1 00:09:42.798 --rc genhtml_legend=1 00:09:42.798 --rc geninfo_all_blocks=1 00:09:42.798 --rc geninfo_unexecuted_blocks=1 00:09:42.798 00:09:42.798 ' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.798 --rc genhtml_branch_coverage=1 00:09:42.798 --rc genhtml_function_coverage=1 00:09:42.798 --rc genhtml_legend=1 00:09:42.798 --rc geninfo_all_blocks=1 00:09:42.798 --rc geninfo_unexecuted_blocks=1 00:09:42.798 00:09:42.798 ' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.798 --rc genhtml_branch_coverage=1 00:09:42.798 --rc genhtml_function_coverage=1 00:09:42.798 --rc genhtml_legend=1 00:09:42.798 --rc geninfo_all_blocks=1 00:09:42.798 --rc geninfo_unexecuted_blocks=1 00:09:42.798 00:09:42.798 ' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.798 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.799 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
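gather_supported_nvmf_pci_devs, traced above and below, builds allow-lists of NIC PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX IDs), then walks the matching PCI functions; on this node it settles on the two E810 functions bound to the ice driver. A rough equivalent query by hand (assumes pciutils; 0x8086 is the Intel vendor ID shown in the trace):

    lspci -D -d 8086:159b                        # 0000:af:00.0 and 0000:af:00.1 here
    ls /sys/bus/pci/devices/0000:af:00.0/net     # kernel netdev for that function: cvl_0_0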
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:49.367 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:49.367 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:49.367 Found net devices under 0000:af:00.0: cvl_0_0 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:49.367 Found net devices under 0000:af:00.1: cvl_0_1 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.367 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.367 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:09:49.368 00:09:49.368 --- 10.0.0.2 ping statistics --- 00:09:49.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.368 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
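nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test topology: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); an iptables rule opens port 4420 and both directions are ping-verified. Condensed from the traced commands, minus the xtrace prefixes:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1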
00:09:49.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:09:49.368 00:09:49.368 --- 10.0.0.1 ping statistics --- 00:09:49.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.368 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3516551 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3516551 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3516551 ']' 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.368 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.368 [2024-12-10 12:12:55.373712] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
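With the fabric up, nvmfappstart launches the SPDK target inside the namespace on a single core (mask 0x2) and blocks until its JSON-RPC socket answers; the EAL banner below is that process starting. The essence, per the traced command line (the polling loop is a sketch of what waitforlisten does, not its literal code):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done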
00:09:49.368 [2024-12-10 12:12:55.373815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.368 [2024-12-10 12:12:55.493060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.368 [2024-12-10 12:12:55.598409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.368 [2024-12-10 12:12:55.598449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.368 [2024-12-10 12:12:55.598460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.368 [2024-12-10 12:12:55.598470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.368 [2024-12-10 12:12:55.598478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.368 [2024-12-10 12:12:55.599736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.368 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.368 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:49.368 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.368 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.368 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.627 [2024-12-10 12:12:56.210856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.627 Malloc0 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.627 12:12:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.627 [2024-12-10 12:12:56.325200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3516713 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3516713 /var/tmp/bdevperf.sock 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3516713 ']' 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.627 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.627 [2024-12-10 12:12:56.401654] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
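queue_depth.sh provisions the target over RPC (a TCP transport, a 64 MiB x 512 B malloc bdev, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420), then runs bdevperf in server mode as the initiator with a deliberately deep queue. Collapsed from the rpc_cmd traces above; the controller attach and perform_tests follow below:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests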
00:09:49.627 [2024-12-10 12:12:56.401746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3516713 ] 00:09:49.890 [2024-12-10 12:12:56.514429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.890 [2024-12-10 12:12:56.621540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.578 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.578 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:50.578 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:50.578 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.578 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.578 NVMe0n1 00:09:50.578 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.578 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.578 Running I/O for 10 seconds... 00:09:52.893 10240.00 IOPS, 40.00 MiB/s [2024-12-10T11:13:00.655Z] 10578.50 IOPS, 41.32 MiB/s [2024-12-10T11:13:01.593Z] 10574.67 IOPS, 41.31 MiB/s [2024-12-10T11:13:02.530Z] 10696.50 IOPS, 41.78 MiB/s [2024-12-10T11:13:03.465Z] 10728.40 IOPS, 41.91 MiB/s [2024-12-10T11:13:04.402Z] 10750.33 IOPS, 41.99 MiB/s [2024-12-10T11:13:05.777Z] 10809.71 IOPS, 42.23 MiB/s [2024-12-10T11:13:06.714Z] 10834.38 IOPS, 42.32 MiB/s [2024-12-10T11:13:07.652Z] 10845.67 IOPS, 42.37 MiB/s [2024-12-10T11:13:07.652Z] 10842.30 IOPS, 42.35 MiB/s 00:10:00.826 Latency(us) 00:10:00.826 [2024-12-10T11:13:07.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.826 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:00.826 Verification LBA range: start 0x0 length 0x4000 00:10:00.826 NVMe0n1 : 10.07 10872.85 42.47 0.00 0.00 93855.40 19598.38 59668.97 00:10:00.826 [2024-12-10T11:13:07.652Z] =================================================================================================================== 00:10:00.826 [2024-12-10T11:13:07.652Z] Total : 10872.85 42.47 0.00 0.00 93855.40 19598.38 59668.97 00:10:00.826 { 00:10:00.826 "results": [ 00:10:00.826 { 00:10:00.826 "job": "NVMe0n1", 00:10:00.826 "core_mask": "0x1", 00:10:00.826 "workload": "verify", 00:10:00.826 "status": "finished", 00:10:00.826 "verify_range": { 00:10:00.826 "start": 0, 00:10:00.826 "length": 16384 00:10:00.826 }, 00:10:00.826 "queue_depth": 1024, 00:10:00.826 "io_size": 4096, 00:10:00.826 "runtime": 10.065434, 00:10:00.826 "iops": 10872.854563449524, 00:10:00.826 "mibps": 42.4720881384747, 00:10:00.826 "io_failed": 0, 00:10:00.826 "io_timeout": 0, 00:10:00.826 "avg_latency_us": 93855.40238150932, 00:10:00.826 "min_latency_us": 19598.384761904763, 00:10:00.826 "max_latency_us": 59668.96761904762 00:10:00.826 } 00:10:00.826 ], 00:10:00.826 "core_count": 1 00:10:00.826 } 00:10:00.826 12:13:07 
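The result block above is internally consistent under Little's law: with 1024 I/Os held in flight, expected average latency is queue depth / IOPS = 1024 / 10872.85 ≈ 94.2 ms, against the reported 93.86 ms average (the small residue is ramp and drain inside the 10.07 s runtime). Equivalently, 10872.85 IOPS x 4096 B ≈ 42.47 MiB/s, matching the MiB/s column; at this depth the run is throughput-bound, so per-I/O latency scales with the chosen -q value.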
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3516713 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3516713 ']' 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3516713 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3516713 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3516713' 00:10:00.826 killing process with pid 3516713 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3516713 00:10:00.826 Received shutdown signal, test time was about 10.000000 seconds 00:10:00.826 00:10:00.826 Latency(us) 00:10:00.826 [2024-12-10T11:13:07.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.826 [2024-12-10T11:13:07.652Z] =================================================================================================================== 00:10:00.826 [2024-12-10T11:13:07.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.826 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3516713 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.764 rmmod nvme_tcp 00:10:01.764 rmmod nvme_fabrics 00:10:01.764 rmmod nvme_keyring 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3516551 ']' 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3516551 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3516551 ']' 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3516551 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3516551 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3516551' 00:10:01.764 killing process with pid 3516551 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3516551 00:10:01.764 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3516551 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.143 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.677 00:10:05.677 real 0m22.695s 00:10:05.677 user 0m27.554s 00:10:05.677 sys 0m6.131s 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.677 ************************************ 00:10:05.677 END TEST nvmf_queue_depth 00:10:05.677 ************************************ 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.677 ************************************ 00:10:05.677 START TEST nvmf_target_multipath 00:10:05.677 ************************************ 00:10:05.677 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:05.677 * Looking for test storage... 00:10:05.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:05.677 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.678 --rc genhtml_branch_coverage=1 00:10:05.678 --rc genhtml_function_coverage=1 00:10:05.678 --rc genhtml_legend=1 00:10:05.678 --rc geninfo_all_blocks=1 00:10:05.678 --rc geninfo_unexecuted_blocks=1 00:10:05.678 00:10:05.678 ' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.678 --rc genhtml_branch_coverage=1 00:10:05.678 --rc genhtml_function_coverage=1 00:10:05.678 --rc genhtml_legend=1 00:10:05.678 --rc geninfo_all_blocks=1 00:10:05.678 --rc geninfo_unexecuted_blocks=1 00:10:05.678 00:10:05.678 ' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.678 --rc genhtml_branch_coverage=1 00:10:05.678 --rc genhtml_function_coverage=1 00:10:05.678 --rc genhtml_legend=1 00:10:05.678 --rc geninfo_all_blocks=1 00:10:05.678 --rc geninfo_unexecuted_blocks=1 00:10:05.678 00:10:05.678 ' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.678 --rc genhtml_branch_coverage=1 00:10:05.678 --rc genhtml_function_coverage=1 00:10:05.678 --rc genhtml_legend=1 00:10:05.678 --rc geninfo_all_blocks=1 00:10:05.678 --rc geninfo_unexecuted_blocks=1 00:10:05.678 00:10:05.678 ' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
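Before the multipath test re-sources common.sh, scripts/common.sh runs the version guard traced a little above: lt 1.15 2 asks whether the installed lcov (1.15) predates 2.x, splitting each version string on '.', '-' and ':' and comparing field by field, which here selects the pre-2.0 --rc option names for LCOV_OPTS. A compact re-sketch of that comparator (not the verbatim implementation):

    lt() {  # usage: lt A B -> success if version A < version B
        local IFS='.-:' i a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo "lcov predates 2.x"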
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.678 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
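nvmftestinit for multipath then repeats the whole bootstrap: because NET_TYPE=phy, prepare_net_devs takes the is_hw path and re-scans the physical NICs (below) instead of building a virtual topology, so every target test in this suite starts from bare E810 ports. The switch is just the traced comparison:

    NET_TYPE=phy        # set in common.sh@21; 'virt' would take the software path instead
    [[ $NET_TYPE != virt ]] && gather_supported_nvmf_pci_devs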
net_devs=() 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.952 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:10.953 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:10.953 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:10.953 Found net devices under 0000:af:00.0: cvl_0_0 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.953 12:13:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:10.953 Found net devices under 0000:af:00.1: cvl_0_1 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:10:10.953 00:10:10.953 --- 10.0.0.2 ping statistics --- 00:10:10.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.953 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:10:10.953 00:10:10.953 --- 10.0.0.1 ping statistics --- 00:10:10.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.953 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:10.953 only one NIC for nvmf test 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
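Note: the nvmf/common.sh@250-291 trace above is nvmf_tcp_init building the namespace-per-target topology the TCP tests run against: the first E810 port (cvl_0_0, ice driver) is moved into a fresh network namespace as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and one ping in each direction proves the link. A standalone sketch of the same pattern, with interface names, addresses, and the 4420 port taken from the log (run as root; illustrative, not a replacement for the real helper):

    #!/usr/bin/env bash
    set -e
    TGT_IF=cvl_0_0        # target-side port, per the trace
    INI_IF=cvl_0_1        # initiator-side port, per the trace
    NS=cvl_0_0_ns_spdk    # NVMF_TARGET_NAMESPACE in the trace

    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                          # target port lives in the ns
    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # NVMF_INITIATOR_IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # NVMF_FIRST_TARGET_IP
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Tagged with a comment so teardown can strip the rule again later.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: test rule'
    ping -c 1 10.0.0.2                                         # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> root ns

The comment tag matters in the teardown that follows: nvmftestfini removes every test rule via iptables-save | grep -v SPDK_NVMF | iptables-restore (nvmf/common.sh@791). The multipath test itself then short-circuits at multipath.sh@45-48: with no extra NIC pair configured, '[' -z ']' is true, so it prints 'only one NIC for nvmf test', runs nvmftestfini to unload the nvme-tcp modules and flush the addresses, and exits 0.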
00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.953 rmmod nvme_tcp 00:10:10.953 rmmod nvme_fabrics 00:10:10.953 rmmod nvme_keyring 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.953 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.954 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.954 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.954 12:13:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.870 00:10:12.870 real 0m7.447s 00:10:12.870 user 0m1.444s 00:10:12.870 sys 0m3.881s 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:12.870 ************************************ 00:10:12.870 END TEST nvmf_target_multipath 00:10:12.870 ************************************ 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.870 ************************************ 00:10:12.870 START TEST nvmf_zcopy 00:10:12.870 ************************************ 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:12.870 * Looking for test storage... 
00:10:12.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.870 --rc genhtml_branch_coverage=1 00:10:12.870 --rc genhtml_function_coverage=1 00:10:12.870 --rc genhtml_legend=1 00:10:12.870 --rc geninfo_all_blocks=1 00:10:12.870 --rc geninfo_unexecuted_blocks=1 00:10:12.870 00:10:12.870 ' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.870 --rc genhtml_branch_coverage=1 00:10:12.870 --rc genhtml_function_coverage=1 00:10:12.870 --rc genhtml_legend=1 00:10:12.870 --rc geninfo_all_blocks=1 00:10:12.870 --rc geninfo_unexecuted_blocks=1 00:10:12.870 00:10:12.870 ' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.870 --rc genhtml_branch_coverage=1 00:10:12.870 --rc genhtml_function_coverage=1 00:10:12.870 --rc genhtml_legend=1 00:10:12.870 --rc geninfo_all_blocks=1 00:10:12.870 --rc geninfo_unexecuted_blocks=1 00:10:12.870 00:10:12.870 ' 00:10:12.870 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.870 --rc genhtml_branch_coverage=1 00:10:12.871 --rc genhtml_function_coverage=1 00:10:12.871 --rc genhtml_legend=1 00:10:12.871 --rc geninfo_all_blocks=1 00:10:12.871 --rc geninfo_unexecuted_blocks=1 00:10:12.871 00:10:12.871 ' 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.871 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.129 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.129 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.129 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.130 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.406 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:18.407 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:18.407 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:18.407 Found net devices under 0000:af:00.0: cvl_0_0 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:18.407 Found net devices under 0000:af:00.1: cvl_0_1 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.407 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:10:18.407 00:10:18.407 --- 10.0.0.2 ping statistics --- 00:10:18.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.407 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:10:18.408 00:10:18.408 --- 10.0.0.1 ping statistics --- 00:10:18.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.408 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3525532 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3525532 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3525532 ']' 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:18.408 [2024-12-10 12:13:24.639702] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:18.408 [2024-12-10 12:13:24.639806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.408 [2024-12-10 12:13:24.756944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.408 [2024-12-10 12:13:24.861686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.408 [2024-12-10 12:13:24.861729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.408 [2024-12-10 12:13:24.861740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.408 [2024-12-10 12:13:24.861766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.408 [2024-12-10 12:13:24.861775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.408 [2024-12-10 12:13:24.863072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.668 [2024-12-10 12:13:25.470202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.668 [2024-12-10 12:13:25.486367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.668 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.927 malloc0 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.927 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.928 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.928 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.928 { 00:10:18.928 "params": { 00:10:18.928 "name": "Nvme$subsystem", 00:10:18.928 "trtype": "$TEST_TRANSPORT", 00:10:18.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.928 "adrfam": "ipv4", 00:10:18.928 "trsvcid": "$NVMF_PORT", 00:10:18.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.928 "hdgst": ${hdgst:-false}, 00:10:18.928 "ddgst": ${ddgst:-false} 00:10:18.928 }, 00:10:18.928 "method": "bdev_nvme_attach_controller" 00:10:18.928 } 00:10:18.928 EOF 00:10:18.928 )") 00:10:18.928 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.928 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
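Note: at this point the zcopy test has a full target stack running inside the namespace. nvmf_tgt was launched with ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF -m 0x2 (nvmf/common.sh@508, pid 3525532), and zcopy.sh@22-30 configured it over RPC. The heredoc traced just above is gen_nvmf_target_json assembling the initiator-side bdev config, which zcopy.sh@33 hands to bdevperf through process substitution, which is why the command line shows --json /dev/fd/62. A condensed sketch of the same sequence; rpc_cmd in the trace is assumed here to wrap scripts/rpc.py, and paths are relative to the SPDK tree:

    # Target side: the RPCs traced at zcopy.sh@22-30.
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB bdev, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: generate the attach-controller JSON on the fly and run
    # a 10 s verify workload at queue depth 128 with 8 KiB I/Os (zcopy.sh@33).
    build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 10 -q 128 -w verify -o 8192

The JSON printed in the trace immediately after is the expanded template: a single bdev_nvme_attach_controller call pointing Nvme1 at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, with header and data digests disabled.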
00:10:18.928 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.928 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.928 "params": { 00:10:18.928 "name": "Nvme1", 00:10:18.928 "trtype": "tcp", 00:10:18.928 "traddr": "10.0.0.2", 00:10:18.928 "adrfam": "ipv4", 00:10:18.928 "trsvcid": "4420", 00:10:18.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.928 "hdgst": false, 00:10:18.928 "ddgst": false 00:10:18.928 }, 00:10:18.928 "method": "bdev_nvme_attach_controller" 00:10:18.928 }' 00:10:18.928 [2024-12-10 12:13:25.617757] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:18.928 [2024-12-10 12:13:25.617841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525773 ] 00:10:18.928 [2024-12-10 12:13:25.729577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.186 [2024-12-10 12:13:25.838004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.445 Running I/O for 10 seconds... 00:10:21.761 7410.00 IOPS, 57.89 MiB/s [2024-12-10T11:13:29.524Z] 7497.50 IOPS, 58.57 MiB/s [2024-12-10T11:13:30.463Z] 7517.00 IOPS, 58.73 MiB/s [2024-12-10T11:13:31.399Z] 7513.00 IOPS, 58.70 MiB/s [2024-12-10T11:13:32.334Z] 7514.60 IOPS, 58.71 MiB/s [2024-12-10T11:13:33.711Z] 7524.50 IOPS, 58.79 MiB/s [2024-12-10T11:13:34.648Z] 7527.86 IOPS, 58.81 MiB/s [2024-12-10T11:13:35.583Z] 7534.88 IOPS, 58.87 MiB/s [2024-12-10T11:13:36.521Z] 7539.22 IOPS, 58.90 MiB/s [2024-12-10T11:13:36.521Z] 7539.10 IOPS, 58.90 MiB/s 00:10:29.695 Latency(us) 00:10:29.695 [2024-12-10T11:13:36.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.695 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:29.695 Verification LBA range: start 0x0 length 0x1000 00:10:29.695 Nvme1n1 : 10.01 7541.78 58.92 0.00 0.00 16925.04 2543.42 24841.26 00:10:29.695 [2024-12-10T11:13:36.521Z] =================================================================================================================== 00:10:29.695 [2024-12-10T11:13:36.521Z] Total : 7541.78 58.92 0.00 0.00 16925.04 2543.42 24841.26 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3527684 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:30.632 [2024-12-10 12:13:37.182721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.632 [2024-12-10 12:13:37.182757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:30.632 12:13:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:30.632 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:30.632 { 00:10:30.632 "params": { 00:10:30.632 "name": "Nvme$subsystem", 00:10:30.632 "trtype": "$TEST_TRANSPORT", 00:10:30.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.632 "adrfam": "ipv4", 00:10:30.632 "trsvcid": "$NVMF_PORT", 00:10:30.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.632 "hdgst": ${hdgst:-false}, 00:10:30.632 "ddgst": ${ddgst:-false} 00:10:30.632 }, 00:10:30.633 "method": "bdev_nvme_attach_controller" 00:10:30.633 } 00:10:30.633 EOF 00:10:30.633 )") 00:10:30.633 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:30.633 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:30.633 [2024-12-10 12:13:37.190734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.190759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:30.633 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:30.633 "params": { 00:10:30.633 "name": "Nvme1", 00:10:30.633 "trtype": "tcp", 00:10:30.633 "traddr": "10.0.0.2", 00:10:30.633 "adrfam": "ipv4", 00:10:30.633 "trsvcid": "4420", 00:10:30.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.633 "hdgst": false, 00:10:30.633 "ddgst": false 00:10:30.633 }, 00:10:30.633 "method": "bdev_nvme_attach_controller" 00:10:30.633 }' 00:10:30.633 [2024-12-10 12:13:37.198718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.198741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.206750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.206771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.214765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.214786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.222771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.222792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.230809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.230829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.238825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.238845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.241088] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:30.633 [2024-12-10 12:13:37.241160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527684 ] 00:10:30.633 [2024-12-10 12:13:37.246830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.246850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.254873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.254893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.262880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.262902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.270913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.270933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.278927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.278947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.286953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.286973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.294969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.294988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.302995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.303018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.311009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.311029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.319038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.319057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.327045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.327064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.335078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.335096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.343099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.343118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.351112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.351131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.354990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.633 [2024-12-10 12:13:37.359143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.359171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.367175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.367211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.375186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.375220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.383247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.383266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.391251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.391270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.399280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.399299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.407279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.407298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.415304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.415323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.423322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.423340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.431345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.431363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.439354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.439373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.447388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.447407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.633 [2024-12-10 12:13:37.455398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.633 [2024-12-10 12:13:37.455416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.463430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:30.893 [2024-12-10 12:13:37.463449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.467540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.893 [2024-12-10 12:13:37.471470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.471488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.479502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.479520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.487512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.487531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.495554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.495572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.503536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.503554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.511569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.511587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.519589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.519607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.527612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.893 [2024-12-10 12:13:37.527630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.893 [2024-12-10 12:13:37.535639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.908 [2024-12-10 12:13:37.535657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.908 [2024-12-10 12:13:37.543646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.908 [2024-12-10 12:13:37.543664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.908 [2024-12-10 12:13:37.551685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.908 [2024-12-10 12:13:37.551704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.908 [2024-12-10 12:13:37.559704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.908 [2024-12-10 12:13:37.559724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.908 [2024-12-10 12:13:37.567727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.908 [2024-12-10 12:13:37.567748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.908 [2024-12-10 12:13:37.575761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.908 [2024-12-10 12:13:37.575780] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.908 [2024-12-10 12:13:37.583753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.908 [2024-12-10 12:13:37.583771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.908 [2024-12-10 12:13:37.591790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.591809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.599806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.599828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.607817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.607835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.615854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.615872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.623890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.623908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.631884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.631902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.639918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.639936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.647928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.647947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.655965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.655983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.663981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.664000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.672006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.672024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.680026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.680044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.688049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.688067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.696061] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.696080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.704101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.704120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-12-10 12:13:37.712103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-12-10 12:13:37.712123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.168 [2024-12-10 12:13:37.720134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.168 [2024-12-10 12:13:37.720152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.168 [2024-12-10 12:13:37.728152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.168 [2024-12-10 12:13:37.728176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.168 [2024-12-10 12:13:37.736164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.168 [2024-12-10 12:13:37.736188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.168 [2024-12-10 12:13:37.744205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.168 [2024-12-10 12:13:37.744223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.168 [2024-12-10 12:13:37.752224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.168 [2024-12-10 12:13:37.752242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.168 [2024-12-10 12:13:37.760234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.760252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.768283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.768301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.776270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.776288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.784310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.784328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.792329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.792347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.800343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.800362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.808372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.808390] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.816411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.816433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.824426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.824447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.832450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.832471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.840464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.840484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.848499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.848518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.856525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.856544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.864529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.864549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.872565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.872584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.880584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.880602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.888598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.888616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.896634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.896654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.904640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.904659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.912686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.912706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.920711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.920730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.928710] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.928729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.936743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.936762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.944767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.944786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.952808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.952826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.960814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.960832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.968825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.968843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.976859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.976877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-12-10 12:13:37.984881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-12-10 12:13:37.984899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.030432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.030457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.037038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.037059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.045053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.045073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 Running I/O for 5 seconds... 
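The interleaved error pairs, subsystem.c:2130 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520 "Unable to add namespace", are expected noise from this test rather than a failure: while bdevperf drives I/O ("Running I/O for 5 seconds..." above), the target side evidently keeps attempting to re-register namespace ID 1 on the paused subsystem, and every attempt is rejected by spdk_nvmf_subsystem_add_ns_ext and surfaced by the RPC handler. The interim stat line further below (14353.00 IOPS, 112.13 MiB/s) is consistent with roughly 8 KiB per I/O, since 112.13 MiB/s ÷ 14353 IOPS ≈ 8192 bytes. A hedged reproduction of the error pair using scripts/rpc.py follows; the bdev name malloc0 is a placeholder, and the test's actual flow lives in test/nvmf/target/zcopy.sh:

rpc=./scripts/rpc.py
# First registration of NSID 1 on the subsystem succeeds...
$rpc nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
# ...re-adding the same NSID fails: subsystem.c logs "Requested NSID 1
# already in use" and the RPC layer reports "Unable to add namespace",
# matching the paired errors repeated throughout this run.
$rpc nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0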
00:10:31.428 [2024-12-10 12:13:38.057203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.057227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.065840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.065864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.077548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.077572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.087345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.087369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.096851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.096878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.104442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.104466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.115673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.115695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.125445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.125468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.134797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.134820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.142327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.142358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.153230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.153254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.162098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.162122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.170734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.170758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.179296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.179319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.187857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 
[2024-12-10 12:13:38.187881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.196669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.196694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.205430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.205454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.214464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.214488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.223264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.223288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.231958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.231981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.240635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.240658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-12-10 12:13:38.249557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-12-10 12:13:38.249581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-12-10 12:13:38.258351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-12-10 12:13:38.258374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-12-10 12:13:38.267262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-12-10 12:13:38.267290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-12-10 12:13:38.277494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-12-10 12:13:38.277517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-12-10 12:13:38.286045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-12-10 12:13:38.286069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-12-10 12:13:38.297123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-12-10 12:13:38.297147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-12-10 12:13:38.305126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-12-10 12:13:38.305149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.315939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.315962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.325626] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.325650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.333140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.333164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.344315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.344339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.352631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.352657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.363015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.363040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.370937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.370961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.381956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.381980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.391717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.391741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.399347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.399370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.410828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.410852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.419423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.419447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.429988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.430011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.438159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.438189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.449236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.449264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.459057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.459080] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.468729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.468753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.476461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.476484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.487582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.487606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.496433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.496456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-12-10 12:13:38.505053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-12-10 12:13:38.505077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.513897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.513921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.522548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.522573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.531175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.531198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.541073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.541103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.549626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.549649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.560148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.560179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.569834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.569857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.577469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.577491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.588590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.588613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.598195] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.598219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.605753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.605775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.617122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.617145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.626948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.626977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.635585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.635608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.645707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.645730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.653887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.653911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.664711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.664736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.674541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.674564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.682281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.682305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.693992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-12-10 12:13:38.694016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-12-10 12:13:38.702737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.702760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.711520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.711543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.720376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.720398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.729102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.729126] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.737627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.737650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.746239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.746263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.754799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.754822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.763636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.948 [2024-12-10 12:13:38.763659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.948 [2024-12-10 12:13:38.772582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.772606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.781284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.781308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.790463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.790486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.799542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.799570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.808639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.808663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.817519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.817542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.826592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.826615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.835355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.835378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.844263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.844286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.853069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.853093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.863316] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-12-10 12:13:38.863338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-12-10 12:13:38.872064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.872086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.881333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.881356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.890236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.890260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.898705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.898728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.907481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.907503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.916294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.916317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.924979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.925003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.933637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.933661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.942276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.942300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.951465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.951491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.960596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.960619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.969555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.969578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.978706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.978729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.987703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.987726] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:38.996348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:38.996371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:39.005176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:39.005200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:39.014115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:39.014138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:39.022984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:39.023007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.208 [2024-12-10 12:13:39.032283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.208 [2024-12-10 12:13:39.032306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.040899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.040921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.049765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.049788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 14353.00 IOPS, 112.13 MiB/s [2024-12-10T11:13:39.293Z] [2024-12-10 12:13:39.059101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.059129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.067950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.067973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.076904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.076928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.085865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.085888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.094734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.094758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.103816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.103839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.112497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.112520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 
12:13:39.121334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.121356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.130446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.130469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.139522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.139544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.148235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.148257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.157241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.157264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.166325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.166364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.175291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.175314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.184163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.184195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.193056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.193079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.202010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.202034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.210821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.210844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.219632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.219655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.228406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.228429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.237383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.237408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.467 [2024-12-10 12:13:39.246110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.467 [2024-12-10 12:13:39.246135] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:32.467 [2024-12-10 12:13:39.254925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:32.467 [2024-12-10 12:13:39.254949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats continuously, roughly every 9-10 ms, from 2024-12-10 12:13:39.263651 through 2024-12-10 12:13:42.094592, interleaved only with the periodic throughput samples below ...]
00:10:33.246 14344.00 IOPS, 112.06 MiB/s [2024-12-10T11:13:40.072Z]
00:10:34.285 14331.00 IOPS, 111.96 MiB/s [2024-12-10T11:13:41.111Z]
00:10:35.325 14336.50 IOPS, 112.00 MiB/s [2024-12-10T11:13:42.151Z]
00:10:35.326 [2024-12-10
12:13:42.103546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.326 [2024-12-10 12:13:42.103570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.326 [2024-12-10 12:13:42.112485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.326 [2024-12-10 12:13:42.112510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.326 [2024-12-10 12:13:42.121262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.326 [2024-12-10 12:13:42.121285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.326 [2024-12-10 12:13:42.130244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.326 [2024-12-10 12:13:42.130272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.326 [2024-12-10 12:13:42.138880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.326 [2024-12-10 12:13:42.138903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.326 [2024-12-10 12:13:42.147711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.326 [2024-12-10 12:13:42.147735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.156642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.156665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.165647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.165671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.174632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.174656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.183542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.183565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.192678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.192701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.201648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.201671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.210517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.210540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.219507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.219531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.228270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.228293] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.236880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.236903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.245567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.245590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.254327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.254350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.263183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.263222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.272278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.272306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.281150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.281180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.289956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.289979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.298821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.298843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.307668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.307692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.316583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.316607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.325454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.325479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.334197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.334220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.342969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.342992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.351991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.352015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.360861] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.360883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.369605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.369628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.378513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.378536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.387260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.387283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.395873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.585 [2024-12-10 12:13:42.395896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.585 [2024-12-10 12:13:42.404760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.586 [2024-12-10 12:13:42.404782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.413893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.413916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.422852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.422876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.432000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.432024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.440758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.440781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.449536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.449561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.458558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.458581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.467369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.467394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.476129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.476153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.485091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.485116] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.494024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.494049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.502908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.502932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.511785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.511809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.520870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.845 [2024-12-10 12:13:42.520893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.845 [2024-12-10 12:13:42.529595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.529619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.538410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.538434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.547214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.547237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.556197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.556220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.565128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.565151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.573932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.573955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.582866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.582890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.591699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.591722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.600611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.600634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.609716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.609739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.618677] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.618700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.627671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.627694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.636591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.636615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.645424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.645448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.654304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.654327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.846 [2024-12-10 12:13:42.663497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.846 [2024-12-10 12:13:42.663519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.672718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.672742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.681385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.681409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.690110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.690134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.699038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.699062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.708078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.708102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.716894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.716918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.725595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.725619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.734372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.105 [2024-12-10 12:13:42.734395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.105 [2024-12-10 12:13:42.743139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.743163] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.752197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.752220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.761017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.761040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.769945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.769969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.778796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.778819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.787726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.787750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.797022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.797050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.806026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.806049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.815175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.815199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.824151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.824181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.833135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.833158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.842307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.842331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.851500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.851523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.860653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.860676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.869640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.869663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.878585] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.878608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.887816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.887839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.896663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.896686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.905727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.905750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.914632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.914655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.106 [2024-12-10 12:13:42.924614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.106 [2024-12-10 12:13:42.924637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:42.934611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:42.934634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:42.942179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:42.942201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:42.953439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:42.953462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:42.963155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:42.963184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:42.972803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:42.972831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:42.980575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:42.980597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:42.991582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:42.991605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:43.001629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:43.001652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.365 [2024-12-10 12:13:43.009164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.365 [2024-12-10 12:13:43.009193] 
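Each elided pair is the target rejecting one add-namespace RPC because NSID 1 is still claimed. As a rough sketch of the kind of call that produces one such pair -- assuming SPDK's stock scripts/rpc.py and the subsystem and bdev names seen elsewhere in this log, not the literal zcopy.sh loop:

  # Hypothetical retry loop; each failed attempt makes the target log one
  # "Requested NSID 1 already in use" / "Unable to add namespace" pair.
  while ! ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; do
      sleep 0.01   # matches the ~10 ms spacing of the elided records
  done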
00:10:36.365 14335.00 IOPS, 111.99 MiB/s [2024-12-10T11:13:43.191Z]
[one further identical error record pair at 12:13:43.066; elided]
00:10:36.365 Latency(us)
00:10:36.365 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:10:36.365 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:36.365 Nvme1n1            : 5.01        14338.30  112.02  0.00    0.00  8918.37  3573.27  15229.32
00:10:36.365 ===================================================================================================================
00:10:36.365 Total              :             14338.30  112.02  0.00    0.00  8918.37  3573.27  15229.32
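The IOPS and MiB/s columns are consistent with the job's 8192-byte I/O size; as a quick check (a worked example, not log output):

  # 14338.30 IOPS * 8192 B per I/O, converted to MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 14338.30 * 8192 / 1048576 }'   # -> 112.02 MiB/s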
[identical "Requested NSID 1 already in use" / "Unable to add namespace" record pairs continue at ~8 ms intervals through 12:13:43.96, when the retry loop ends; elided]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.144 [2024-12-10 12:13:43.932843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.144 [2024-12-10 12:13:43.940841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.144 [2024-12-10 12:13:43.940860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.144 [2024-12-10 12:13:43.948865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.144 [2024-12-10 12:13:43.948885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.144 [2024-12-10 12:13:43.956875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.144 [2024-12-10 12:13:43.956895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.144 [2024-12-10 12:13:43.964910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.144 [2024-12-10 12:13:43.964930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3527684) - No such process 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3527684 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.403 delay0 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.403 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:37.403 [2024-12-10 12:13:44.187404] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:44.073 Initializing NVMe Controllers 00:10:44.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 0 00:10:44.073 Initialization complete. Launching workers. 00:10:44.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 533 00:10:44.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 820, failed to submit 33 00:10:44.073 success 615, unsuccessful 205, failed 0 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.073 rmmod nvme_tcp 00:10:44.073 rmmod nvme_fabrics 00:10:44.073 rmmod nvme_keyring 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3525532 ']' 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3525532 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3525532 ']' 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3525532 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3525532 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3525532' 00:10:44.073 killing process with pid 3525532 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3525532 00:10:44.073 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3525532 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.008 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.910 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.910 00:10:46.910 real 0m34.233s 00:10:46.910 user 0m47.880s 00:10:46.910 sys 0m10.104s 00:10:46.910 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.910 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.910 ************************************ 00:10:46.910 END TEST nvmf_zcopy 00:10:46.910 ************************************ 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.169 ************************************ 00:10:47.169 START TEST nvmf_nmic 00:10:47.169 ************************************ 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:47.169 * Looking for test storage... 
00:10:47.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:47.169 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.170 --rc genhtml_branch_coverage=1 00:10:47.170 --rc genhtml_function_coverage=1 00:10:47.170 --rc genhtml_legend=1 00:10:47.170 --rc geninfo_all_blocks=1 00:10:47.170 --rc geninfo_unexecuted_blocks=1 00:10:47.170 00:10:47.170 ' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.170 --rc genhtml_branch_coverage=1 00:10:47.170 --rc genhtml_function_coverage=1 00:10:47.170 --rc genhtml_legend=1 00:10:47.170 --rc geninfo_all_blocks=1 00:10:47.170 --rc geninfo_unexecuted_blocks=1 00:10:47.170 00:10:47.170 ' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.170 --rc genhtml_branch_coverage=1 00:10:47.170 --rc genhtml_function_coverage=1 00:10:47.170 --rc genhtml_legend=1 00:10:47.170 --rc geninfo_all_blocks=1 00:10:47.170 --rc geninfo_unexecuted_blocks=1 00:10:47.170 00:10:47.170 ' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.170 --rc genhtml_branch_coverage=1 00:10:47.170 --rc genhtml_function_coverage=1 00:10:47.170 --rc genhtml_legend=1 00:10:47.170 --rc geninfo_all_blocks=1 00:10:47.170 --rc geninfo_unexecuted_blocks=1 00:10:47.170 00:10:47.170 ' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
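The xtrace above is stepping through the lcov version gate: 'lt 1.15 2' splits both version strings on IFS=".-:" and compares them field by field, so lcov 1.15 sorts before 2 and the branch/function coverage options get enabled. A minimal sketch of that comparison, assuming purely numeric fields (simplified, not the verbatim scripts/common.sh helper):

    # lt A B: return 0 (true) when dotted version A sorts before version B.
    lt() {
        local IFS=.-:                          # field separators, as in the trace above
        local -a ver1=($1) ver2=($2)
        local i d1 d2
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            d1=${ver1[i]:-0} d2=${ver2[i]:-0}  # a missing field compares as 0
            ((d1 < d2)) && return 0
            ((d1 > d2)) && return 1
        done
        return 1                               # equal versions are not less-than
    }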
00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:47.170 
12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.170 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:52.442 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:52.442 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.442 12:13:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:52.442 Found net devices under 0000:af:00.0: cvl_0_0 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:52.442 Found net devices under 0000:af:00.1: cvl_0_1 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.442 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.443 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:10:52.702 00:10:52.702 --- 10.0.0.2 ping statistics --- 00:10:52.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.702 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:52.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:52.702 00:10:52.702 --- 10.0.0.1 ping statistics --- 00:10:52.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.702 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3533488 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3533488 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3533488 ']' 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.702 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.702 [2024-12-10 12:13:59.478934] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
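Condensed, the nvmftestinit sequence traced above amounts to the following (a sketch assembled from the commands in this log; the cvl_0_0/cvl_0_1 interface names and addresses are specific to this host):

    # The target NIC moves into its own network namespace; the initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                 # root ns -> target, checked above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator, checked above
    modprobe nvme-tcp
    # nvmf_tgt itself then starts inside the namespace (the NVMF_APP prefix seen above):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF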
00:10:52.702 [2024-12-10 12:13:59.479017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.961 [2024-12-10 12:13:59.595391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.961 [2024-12-10 12:13:59.708092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.961 [2024-12-10 12:13:59.708135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.961 [2024-12-10 12:13:59.708146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.961 [2024-12-10 12:13:59.708156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.961 [2024-12-10 12:13:59.708163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.961 [2024-12-10 12:13:59.710553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.961 [2024-12-10 12:13:59.710634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.961 [2024-12-10 12:13:59.710696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.961 [2024-12-10 12:13:59.710706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.527 [2024-12-10 12:14:00.325671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.527 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.786 Malloc0 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.786 [2024-12-10 12:14:00.448796] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:53.786 test case1: single bdev can't be used in multiple subsystems 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:53.786 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.787 [2024-12-10 12:14:00.480666] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:53.787 [2024-12-10 12:14:00.480699] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:53.787 [2024-12-10 12:14:00.480710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.787 request: 00:10:53.787 { 00:10:53.787 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:53.787 "namespace": { 00:10:53.787 "bdev_name": "Malloc0", 00:10:53.787 "no_auto_visible": false, 
00:10:53.787 "hide_metadata": false 00:10:53.787 }, 00:10:53.787 "method": "nvmf_subsystem_add_ns", 00:10:53.787 "req_id": 1 00:10:53.787 } 00:10:53.787 Got JSON-RPC error response 00:10:53.787 response: 00:10:53.787 { 00:10:53.787 "code": -32602, 00:10:53.787 "message": "Invalid parameters" 00:10:53.787 } 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:53.787 Adding namespace failed - expected result. 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:53.787 test case2: host connect to nvmf target in multiple paths 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.787 [2024-12-10 12:14:00.492818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.787 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.163 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:56.097 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.097 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.097 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.097 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.097 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.998 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.998 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.998 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.998 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.998 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.998 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:57.998 12:14:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:57.998 [global] 00:10:57.998 thread=1 00:10:57.998 invalidate=1 00:10:57.998 rw=write 00:10:57.998 time_based=1 00:10:57.998 runtime=1 00:10:57.998 ioengine=libaio 00:10:57.998 direct=1 00:10:57.998 bs=4096 00:10:57.998 iodepth=1 00:10:57.998 norandommap=0 00:10:57.998 numjobs=1 00:10:57.998 00:10:57.998 verify_dump=1 00:10:57.998 verify_backlog=512 00:10:57.998 verify_state_save=0 00:10:57.998 do_verify=1 00:10:57.998 verify=crc32c-intel 00:10:57.998 [job0] 00:10:57.998 filename=/dev/nvme0n1 00:10:57.998 Could not set queue depth (nvme0n1) 00:10:58.257 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.257 fio-3.35 00:10:58.257 Starting 1 thread 00:10:59.632 00:10:59.632 job0: (groupid=0, jobs=1): err= 0: pid=3534544: Tue Dec 10 12:14:06 2024 00:10:59.632 read: IOPS=2289, BW=9159KiB/s (9379kB/s)(9168KiB/1001msec) 00:10:59.632 slat (nsec): min=6320, max=24930, avg=7150.07, stdev=762.05 00:10:59.632 clat (usec): min=188, max=346, avg=236.55, stdev=23.98 00:10:59.632 lat (usec): min=195, max=354, avg=243.70, stdev=23.99 00:10:59.632 clat percentiles (usec): 00:10:59.632 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:10:59.632 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:10:59.632 | 70.00th=[ 235], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 281], 00:10:59.632 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 302], 99.95th=[ 334], 00:10:59.632 | 99.99th=[ 347] 00:10:59.632 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:59.632 slat (nsec): min=9219, max=49034, avg=10230.64, stdev=1438.69 00:10:59.632 clat (usec): min=120, max=399, avg=158.13, stdev= 8.37 00:10:59.632 lat (usec): min=135, max=439, avg=168.36, stdev= 8.69 00:10:59.632 clat percentiles (usec): 00:10:59.632 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:10:59.632 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 157], 60.00th=[ 159], 00:10:59.632 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 169], 00:10:59.632 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 217], 99.95th=[ 221], 00:10:59.632 | 99.99th=[ 400] 00:10:59.632 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:10:59.632 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:59.632 lat (usec) : 250=87.96%, 500=12.04% 00:10:59.632 cpu : usr=2.20%, sys=4.50%, ctx=4853, majf=0, minf=1 00:10:59.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.632 issued rwts: total=2292,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.632 00:10:59.632 Run status group 0 (all jobs): 00:10:59.632 READ: bw=9159KiB/s (9379kB/s), 9159KiB/s-9159KiB/s (9379kB/s-9379kB/s), io=9168KiB (9388kB), run=1001-1001msec 00:10:59.632 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:59.632 00:10:59.632 Disk stats (read/write): 00:10:59.632 nvme0n1: ios=2098/2326, merge=0/0, ticks=484/349, in_queue=833, util=91.48% 00:10:59.632 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.199 rmmod nvme_tcp 00:11:00.199 rmmod nvme_fabrics 00:11:00.199 rmmod nvme_keyring 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3533488 ']' 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3533488 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3533488 ']' 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3533488 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3533488 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3533488' 00:11:00.199 killing process with pid 3533488 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3533488 00:11:00.199 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 3533488 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.575 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.109 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.109 00:11:04.109 real 0m16.547s 00:11:04.109 user 0m40.591s 00:11:04.109 sys 0m5.186s 00:11:04.109 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:04.110 ************************************ 00:11:04.110 END TEST nvmf_nmic 00:11:04.110 ************************************ 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.110 ************************************ 00:11:04.110 START TEST nvmf_fio_target 00:11:04.110 ************************************ 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:04.110 * Looking for test storage... 
00:11:04.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.110 --rc genhtml_branch_coverage=1 00:11:04.110 --rc genhtml_function_coverage=1 00:11:04.110 --rc genhtml_legend=1 00:11:04.110 --rc geninfo_all_blocks=1 00:11:04.110 --rc geninfo_unexecuted_blocks=1 00:11:04.110 00:11:04.110 ' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.110 --rc genhtml_branch_coverage=1 00:11:04.110 --rc genhtml_function_coverage=1 00:11:04.110 --rc genhtml_legend=1 00:11:04.110 --rc geninfo_all_blocks=1 00:11:04.110 --rc geninfo_unexecuted_blocks=1 00:11:04.110 00:11:04.110 ' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.110 --rc genhtml_branch_coverage=1 00:11:04.110 --rc genhtml_function_coverage=1 00:11:04.110 --rc genhtml_legend=1 00:11:04.110 --rc geninfo_all_blocks=1 00:11:04.110 --rc geninfo_unexecuted_blocks=1 00:11:04.110 00:11:04.110 ' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.110 --rc genhtml_branch_coverage=1 00:11:04.110 --rc genhtml_function_coverage=1 00:11:04.110 --rc genhtml_legend=1 00:11:04.110 --rc geninfo_all_blocks=1 00:11:04.110 --rc geninfo_unexecuted_blocks=1 00:11:04.110 00:11:04.110 ' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.110 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.111 12:14:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.111 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.377 12:14:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:09.377 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:09.377 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.377 12:14:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:09.377 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:09.378 Found net devices under 0000:af:00.0: cvl_0_0 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:09.378 Found net devices under 0000:af:00.1: cvl_0_1 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.378 12:14:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:09.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:11:09.378 00:11:09.378 --- 10.0.0.2 ping statistics --- 00:11:09.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.378 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:11:09.378 00:11:09.378 --- 10.0.0.1 ping statistics --- 00:11:09.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.378 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3538467 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3538467 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3538467 ']' 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.378 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.378 [2024-12-10 12:14:16.017171] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:11:09.378 [2024-12-10 12:14:16.017259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.378 [2024-12-10 12:14:16.133638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.636 [2024-12-10 12:14:16.244023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.636 [2024-12-10 12:14:16.244064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.636 [2024-12-10 12:14:16.244075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.636 [2024-12-10 12:14:16.244101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.637 [2024-12-10 12:14:16.244109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.637 [2024-12-10 12:14:16.246443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.637 [2024-12-10 12:14:16.246517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.637 [2024-12-10 12:14:16.246619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.637 [2024-12-10 12:14:16.246629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.203 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.203 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:10.203 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.203 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.203 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.204 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.204 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:10.462 [2024-12-10 12:14:17.047711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.462 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.720 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:10.720 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.978 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:10.978 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.237 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:11.237 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.495 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:11.495 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:11.753 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.012 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:12.012 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.270 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:12.270 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.528 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:12.528 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:12.786 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.786 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:12.786 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.045 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:13.045 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.303 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.561 [2024-12-10 12:14:20.130914] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.561 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:13.561 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:13.819 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.196 12:14:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:15.196 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:15.196 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.196 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:15.196 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:15.196 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:17.096 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:17.096 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:17.096 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.096 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:17.096 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.096 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:17.096 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:17.096 [global] 00:11:17.096 thread=1 00:11:17.096 invalidate=1 00:11:17.096 rw=write 00:11:17.096 time_based=1 00:11:17.096 runtime=1 00:11:17.096 ioengine=libaio 00:11:17.096 direct=1 00:11:17.096 bs=4096 00:11:17.096 iodepth=1 00:11:17.096 norandommap=0 00:11:17.096 numjobs=1 00:11:17.096 00:11:17.096 verify_dump=1 00:11:17.096 verify_backlog=512 00:11:17.096 verify_state_save=0 00:11:17.096 do_verify=1 00:11:17.096 verify=crc32c-intel 00:11:17.096 [job0] 00:11:17.096 filename=/dev/nvme0n1 00:11:17.096 [job1] 00:11:17.096 filename=/dev/nvme0n2 00:11:17.096 [job2] 00:11:17.096 filename=/dev/nvme0n3 00:11:17.096 [job3] 00:11:17.096 filename=/dev/nvme0n4 00:11:17.096 Could not set queue depth (nvme0n1) 00:11:17.096 Could not set queue depth (nvme0n2) 00:11:17.096 Could not set queue depth (nvme0n3) 00:11:17.096 Could not set queue depth (nvme0n4) 00:11:17.354 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.354 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.354 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.354 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.354 fio-3.35 00:11:17.354 Starting 4 threads 00:11:18.757 00:11:18.757 job0: (groupid=0, jobs=1): err= 0: pid=3540006: Tue Dec 10 12:14:25 2024 00:11:18.757 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:11:18.757 slat (nsec): min=10103, max=25543, avg=23246.32, stdev=2973.21 00:11:18.757 clat (usec): min=40792, max=41959, avg=41013.55, stdev=224.62 00:11:18.757 lat (usec): min=40816, max=41982, avg=41036.80, stdev=224.32 00:11:18.757 clat percentiles (usec): 00:11:18.757 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:11:18.757 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:18.757 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:18.757 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:18.757 | 99.99th=[42206] 00:11:18.757 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:18.757 slat (nsec): min=9581, max=48637, avg=11056.87, stdev=2370.06 00:11:18.757 clat (usec): min=144, max=707, avg=184.70, stdev=28.59 00:11:18.757 lat (usec): min=154, max=718, avg=195.75, stdev=29.14 00:11:18.757 clat percentiles (usec): 00:11:18.757 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:11:18.757 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:11:18.757 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:11:18.758 | 99.00th=[ 223], 99.50th=[ 297], 99.90th=[ 709], 99.95th=[ 709], 00:11:18.758 | 99.99th=[ 709] 00:11:18.758 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.758 lat (usec) : 250=95.13%, 500=0.56%, 750=0.19% 00:11:18.758 lat (msec) : 50=4.12% 00:11:18.758 cpu : usr=0.40%, sys=0.40%, ctx=535, majf=0, minf=1 00:11:18.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.758 job1: (groupid=0, jobs=1): err= 0: pid=3540012: Tue Dec 10 12:14:25 2024 00:11:18.758 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:11:18.758 slat (nsec): min=10760, max=24488, avg=22729.32, stdev=2728.84 00:11:18.758 clat (usec): min=40667, max=41139, avg=40959.76, stdev=99.91 00:11:18.758 lat (usec): min=40677, max=41161, avg=40982.49, stdev=101.59 00:11:18.758 clat percentiles (usec): 00:11:18.758 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:18.758 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:18.758 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:18.758 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:18.758 | 99.99th=[41157] 00:11:18.758 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:11:18.758 slat (usec): min=9, max=1026, avg=14.76, stdev=55.21 00:11:18.758 clat (usec): min=130, max=382, avg=218.84, stdev=25.45 00:11:18.758 lat (usec): min=141, max=1283, avg=233.60, stdev=61.25 00:11:18.758 clat percentiles (usec): 00:11:18.758 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 202], 00:11:18.758 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:11:18.758 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 251], 00:11:18.758 | 99.00th=[ 273], 99.50th=[ 314], 99.90th=[ 383], 99.95th=[ 383], 00:11:18.758 | 99.99th=[ 383] 00:11:18.758 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.758 lat (usec) : 250=91.20%, 500=4.68% 00:11:18.758 lat (msec) : 50=4.12% 00:11:18.758 cpu : usr=0.20%, sys=0.59%, ctx=537, majf=0, minf=2 00:11:18.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:11:18.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.758 job2: (groupid=0, jobs=1): err= 0: pid=3540013: Tue Dec 10 12:14:25 2024 00:11:18.758 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:11:18.758 slat (nsec): min=10393, max=24649, avg=23449.61, stdev=2860.95 00:11:18.758 clat (usec): min=445, max=41918, avg=39275.27, stdev=8467.63 00:11:18.758 lat (usec): min=469, max=41942, avg=39298.72, stdev=8467.40 00:11:18.758 clat percentiles (usec): 00:11:18.758 | 1.00th=[ 445], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:18.758 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:18.758 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:18.758 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:18.758 | 99.99th=[41681] 00:11:18.758 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:11:18.758 slat (nsec): min=5004, max=40615, avg=11413.20, stdev=4006.54 00:11:18.758 clat (usec): min=155, max=693, avg=195.76, stdev=40.94 00:11:18.758 lat (usec): min=161, max=704, avg=207.18, stdev=41.43 00:11:18.758 clat percentiles (usec): 00:11:18.758 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178], 00:11:18.758 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:11:18.758 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 217], 95.00th=[ 273], 00:11:18.758 | 99.00th=[ 310], 99.50th=[ 412], 99.90th=[ 693], 99.95th=[ 693], 00:11:18.758 | 99.99th=[ 693] 00:11:18.758 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.758 lat (usec) : 250=90.09%, 500=5.42%, 750=0.37% 00:11:18.758 lat (msec) : 50=4.11% 00:11:18.758 cpu : usr=0.20%, sys=0.59%, ctx=538, majf=0, minf=1 00:11:18.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.758 job3: (groupid=0, jobs=1): err= 0: pid=3540014: Tue Dec 10 12:14:25 2024 00:11:18.758 read: IOPS=498, BW=1992KiB/s (2040kB/s)(2024KiB/1016msec) 00:11:18.758 slat (nsec): min=6747, max=29713, avg=8265.58, stdev=3536.98 00:11:18.758 clat (usec): min=187, max=41258, avg=1758.62, stdev=7755.06 00:11:18.758 lat (usec): min=194, max=41268, avg=1766.89, stdev=7758.02 00:11:18.758 clat percentiles (usec): 00:11:18.758 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 219], 00:11:18.758 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:11:18.758 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 260], 00:11:18.758 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:18.758 | 99.99th=[41157] 00:11:18.758 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:11:18.758 slat (nsec): min=10048, max=50569, avg=11611.06, stdev=2302.35 00:11:18.758 clat (usec): min=158, max=388, avg=219.21, stdev=24.12 00:11:18.758 lat (usec): min=171, 
max=399, avg=230.82, stdev=24.58 00:11:18.758 clat percentiles (usec): 00:11:18.758 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 204], 00:11:18.758 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:11:18.758 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 253], 00:11:18.758 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 388], 99.95th=[ 388], 00:11:18.758 | 99.99th=[ 388] 00:11:18.758 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.758 lat (usec) : 250=92.34%, 500=5.80% 00:11:18.758 lat (msec) : 50=1.87% 00:11:18.758 cpu : usr=0.69%, sys=0.79%, ctx=1019, majf=0, minf=2 00:11:18.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.758 issued rwts: total=506,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.758 00:11:18.758 Run status group 0 (all jobs): 00:11:18.758 READ: bw=2240KiB/s (2294kB/s), 86.0KiB/s-1992KiB/s (88.1kB/s-2040kB/s), io=2292KiB (2347kB), run=1005-1023msec 00:11:18.758 WRITE: bw=8008KiB/s (8200kB/s), 2002KiB/s-2038KiB/s (2050kB/s-2087kB/s), io=8192KiB (8389kB), run=1005-1023msec 00:11:18.758 00:11:18.758 Disk stats (read/write): 00:11:18.758 nvme0n1: ios=41/512, merge=0/0, ticks=1603/94, in_queue=1697, util=85.77% 00:11:18.758 nvme0n2: ios=82/512, merge=0/0, ticks=786/107, in_queue=893, util=90.56% 00:11:18.758 nvme0n3: ios=82/512, merge=0/0, ticks=1111/96, in_queue=1207, util=94.69% 00:11:18.758 nvme0n4: ios=565/512, merge=0/0, ticks=814/111, in_queue=925, util=95.39% 00:11:18.758 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:18.758 [global] 00:11:18.758 thread=1 00:11:18.758 invalidate=1 00:11:18.758 rw=randwrite 00:11:18.758 time_based=1 00:11:18.758 runtime=1 00:11:18.758 ioengine=libaio 00:11:18.758 direct=1 00:11:18.758 bs=4096 00:11:18.758 iodepth=1 00:11:18.758 norandommap=0 00:11:18.758 numjobs=1 00:11:18.758 00:11:18.758 verify_dump=1 00:11:18.758 verify_backlog=512 00:11:18.758 verify_state_save=0 00:11:18.758 do_verify=1 00:11:18.758 verify=crc32c-intel 00:11:18.758 [job0] 00:11:18.758 filename=/dev/nvme0n1 00:11:18.758 [job1] 00:11:18.758 filename=/dev/nvme0n2 00:11:18.758 [job2] 00:11:18.758 filename=/dev/nvme0n3 00:11:18.758 [job3] 00:11:18.758 filename=/dev/nvme0n4 00:11:18.758 Could not set queue depth (nvme0n1) 00:11:18.758 Could not set queue depth (nvme0n2) 00:11:18.758 Could not set queue depth (nvme0n3) 00:11:18.758 Could not set queue depth (nvme0n4) 00:11:19.019 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.020 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.020 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.020 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.020 fio-3.35 00:11:19.020 Starting 4 threads 00:11:20.391 00:11:20.391 job0: (groupid=0, jobs=1): err= 0: pid=3540376: Tue Dec 10 12:14:26 2024 
00:11:20.391 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:11:20.391 slat (nsec): min=9744, max=23713, avg=22580.67, stdev=2953.07 00:11:20.391 clat (usec): min=40814, max=41189, avg=40978.27, stdev=96.16 00:11:20.391 lat (usec): min=40837, max=41199, avg=41000.85, stdev=94.75 00:11:20.391 clat percentiles (usec): 00:11:20.391 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:20.391 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.391 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:20.391 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:20.391 | 99.99th=[41157] 00:11:20.391 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:20.391 slat (nsec): min=9503, max=39469, avg=11957.37, stdev=2199.25 00:11:20.391 clat (usec): min=145, max=481, avg=257.94, stdev=29.36 00:11:20.391 lat (usec): min=155, max=521, avg=269.90, stdev=30.45 00:11:20.391 clat percentiles (usec): 00:11:20.391 | 1.00th=[ 180], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 229], 00:11:20.391 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:11:20.391 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:11:20.391 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 482], 99.95th=[ 482], 00:11:20.391 | 99.99th=[ 482] 00:11:20.391 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.391 lat (usec) : 250=30.96%, 500=65.10% 00:11:20.391 lat (msec) : 50=3.94% 00:11:20.391 cpu : usr=0.20%, sys=0.70%, ctx=535, majf=0, minf=1 00:11:20.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.391 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.391 job1: (groupid=0, jobs=1): err= 0: pid=3540377: Tue Dec 10 12:14:26 2024 00:11:20.391 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:11:20.391 slat (nsec): min=9812, max=24335, avg=22281.73, stdev=2879.36 00:11:20.391 clat (usec): min=40766, max=41213, avg=40976.35, stdev=90.24 00:11:20.391 lat (usec): min=40791, max=41223, avg=40998.63, stdev=88.54 00:11:20.391 clat percentiles (usec): 00:11:20.391 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:20.391 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.391 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:20.391 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:20.391 | 99.99th=[41157] 00:11:20.391 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:20.391 slat (nsec): min=10235, max=36498, avg=11559.03, stdev=2014.04 00:11:20.391 clat (usec): min=154, max=338, avg=181.51, stdev=14.74 00:11:20.391 lat (usec): min=164, max=375, avg=193.07, stdev=15.48 00:11:20.391 clat percentiles (usec): 00:11:20.391 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:11:20.391 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:11:20.391 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:11:20.391 | 99.00th=[ 225], 99.50th=[ 247], 99.90th=[ 338], 99.95th=[ 338], 00:11:20.391 | 99.99th=[ 338] 
00:11:20.391 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.391 lat (usec) : 250=95.51%, 500=0.37% 00:11:20.391 lat (msec) : 50=4.12% 00:11:20.391 cpu : usr=0.80%, sys=0.50%, ctx=535, majf=0, minf=1 00:11:20.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.391 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.391 job2: (groupid=0, jobs=1): err= 0: pid=3540378: Tue Dec 10 12:14:26 2024 00:11:20.391 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:11:20.391 slat (nsec): min=9884, max=24720, avg=22577.95, stdev=2974.59 00:11:20.391 clat (usec): min=40693, max=41090, avg=40953.65, stdev=84.18 00:11:20.391 lat (usec): min=40703, max=41113, avg=40976.23, stdev=86.25 00:11:20.391 clat percentiles (usec): 00:11:20.391 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:20.391 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.391 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:20.391 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:20.391 | 99.99th=[41157] 00:11:20.391 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:20.391 slat (nsec): min=10676, max=37774, avg=12440.71, stdev=2157.40 00:11:20.391 clat (usec): min=170, max=463, avg=259.77, stdev=30.26 00:11:20.391 lat (usec): min=181, max=500, avg=272.21, stdev=30.90 00:11:20.391 clat percentiles (usec): 00:11:20.391 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 229], 00:11:20.391 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:11:20.391 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:11:20.391 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 465], 99.95th=[ 465], 00:11:20.391 | 99.99th=[ 465] 00:11:20.391 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.391 lat (usec) : 250=28.52%, 500=67.54% 00:11:20.391 lat (msec) : 50=3.94% 00:11:20.391 cpu : usr=0.50%, sys=0.90%, ctx=534, majf=0, minf=1 00:11:20.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.391 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.391 job3: (groupid=0, jobs=1): err= 0: pid=3540379: Tue Dec 10 12:14:26 2024 00:11:20.391 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:11:20.392 slat (nsec): min=9554, max=28650, avg=23936.91, stdev=3419.03 00:11:20.392 clat (usec): min=40809, max=41954, avg=41001.06, stdev=220.96 00:11:20.392 lat (usec): min=40818, max=41983, avg=41025.00, stdev=222.52 00:11:20.392 clat percentiles (usec): 00:11:20.392 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:20.392 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.392 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:11:20.392 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:20.392 | 99.99th=[42206] 00:11:20.392 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:11:20.392 slat (nsec): min=10460, max=37780, avg=12155.74, stdev=2255.43 00:11:20.392 clat (usec): min=155, max=305, avg=191.47, stdev=14.26 00:11:20.392 lat (usec): min=166, max=343, avg=203.63, stdev=14.86 00:11:20.392 clat percentiles (usec): 00:11:20.392 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:11:20.392 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:11:20.392 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 217], 00:11:20.392 | 99.00th=[ 231], 99.50th=[ 245], 99.90th=[ 306], 99.95th=[ 306], 00:11:20.392 | 99.99th=[ 306] 00:11:20.392 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.392 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.392 lat (usec) : 250=95.51%, 500=0.37% 00:11:20.392 lat (msec) : 50=4.12% 00:11:20.392 cpu : usr=0.50%, sys=0.89%, ctx=535, majf=0, minf=1 00:11:20.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.392 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.392 00:11:20.392 Run status group 0 (all jobs): 00:11:20.392 READ: bw=341KiB/s (349kB/s), 83.8KiB/s-87.7KiB/s (85.8kB/s-89.8kB/s), io=344KiB (352kB), run=1001-1009msec 00:11:20.392 WRITE: bw=8119KiB/s (8314kB/s), 2030KiB/s-2046KiB/s (2078kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1009msec 00:11:20.392 00:11:20.392 Disk stats (read/write): 00:11:20.392 nvme0n1: ios=43/512, merge=0/0, ticks=1641/129, in_queue=1770, util=94.18% 00:11:20.392 nvme0n2: ios=43/512, merge=0/0, ticks=1722/87, in_queue=1809, util=98.27% 00:11:20.392 nvme0n3: ios=57/512, merge=0/0, ticks=1742/126, in_queue=1868, util=98.54% 00:11:20.392 nvme0n4: ios=63/512, merge=0/0, ticks=1690/93, in_queue=1783, util=96.54% 00:11:20.392 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:20.392 [global] 00:11:20.392 thread=1 00:11:20.392 invalidate=1 00:11:20.392 rw=write 00:11:20.392 time_based=1 00:11:20.392 runtime=1 00:11:20.392 ioengine=libaio 00:11:20.392 direct=1 00:11:20.392 bs=4096 00:11:20.392 iodepth=128 00:11:20.392 norandommap=0 00:11:20.392 numjobs=1 00:11:20.392 00:11:20.392 verify_dump=1 00:11:20.392 verify_backlog=512 00:11:20.392 verify_state_save=0 00:11:20.392 do_verify=1 00:11:20.392 verify=crc32c-intel 00:11:20.392 [job0] 00:11:20.392 filename=/dev/nvme0n1 00:11:20.392 [job1] 00:11:20.392 filename=/dev/nvme0n2 00:11:20.392 [job2] 00:11:20.392 filename=/dev/nvme0n3 00:11:20.392 [job3] 00:11:20.392 filename=/dev/nvme0n4 00:11:20.392 Could not set queue depth (nvme0n1) 00:11:20.392 Could not set queue depth (nvme0n2) 00:11:20.392 Could not set queue depth (nvme0n3) 00:11:20.392 Could not set queue depth (nvme0n4) 00:11:20.392 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.392 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:11:20.392 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.392 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.392 fio-3.35 00:11:20.392 Starting 4 threads 00:11:21.762 00:11:21.762 job0: (groupid=0, jobs=1): err= 0: pid=3540744: Tue Dec 10 12:14:28 2024 00:11:21.762 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:11:21.762 slat (nsec): min=1323, max=11884k, avg=106127.59, stdev=744149.71 00:11:21.762 clat (usec): min=3702, max=33630, avg=12737.95, stdev=4506.54 00:11:21.762 lat (usec): min=3710, max=41800, avg=12844.08, stdev=4568.18 00:11:21.762 clat percentiles (usec): 00:11:21.762 | 1.00th=[ 6063], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9503], 00:11:21.762 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11600], 60.00th=[12518], 00:11:21.762 | 70.00th=[12911], 80.00th=[15139], 90.00th=[19268], 95.00th=[23200], 00:11:21.762 | 99.00th=[27395], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:11:21.762 | 99.99th=[33817] 00:11:21.762 write: IOPS=4271, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1013msec); 0 zone resets 00:11:21.762 slat (usec): min=2, max=11913, avg=123.64, stdev=645.89 00:11:21.762 clat (usec): min=1509, max=40563, avg=17641.14, stdev=8861.49 00:11:21.762 lat (usec): min=1524, max=40568, avg=17764.78, stdev=8925.33 00:11:21.762 clat percentiles (usec): 00:11:21.762 | 1.00th=[ 3720], 5.00th=[ 5342], 10.00th=[ 7767], 20.00th=[ 8848], 00:11:21.762 | 30.00th=[11469], 40.00th=[13566], 50.00th=[17695], 60.00th=[18744], 00:11:21.762 | 70.00th=[19530], 80.00th=[25297], 90.00th=[31589], 95.00th=[34866], 00:11:21.762 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:11:21.762 | 99.99th=[40633] 00:11:21.762 bw ( KiB/s): min=14280, max=19312, per=29.56%, avg=16796.00, stdev=3558.16, samples=2 00:11:21.762 iops : min= 3570, max= 4828, avg=4199.00, stdev=889.54, samples=2 00:11:21.762 lat (msec) : 2=0.08%, 4=0.64%, 10=27.07%, 20=52.78%, 50=19.42% 00:11:21.762 cpu : usr=4.25%, sys=4.45%, ctx=420, majf=0, minf=1 00:11:21.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:21.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.762 issued rwts: total=4096,4327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.762 job1: (groupid=0, jobs=1): err= 0: pid=3540745: Tue Dec 10 12:14:28 2024 00:11:21.762 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.3MiB/1017msec) 00:11:21.762 slat (nsec): min=1636, max=22144k, avg=123403.01, stdev=960183.77 00:11:21.762 clat (usec): min=3596, max=45564, avg=16165.38, stdev=7396.97 00:11:21.762 lat (usec): min=3603, max=45588, avg=16288.79, stdev=7466.10 00:11:21.762 clat percentiles (usec): 00:11:21.762 | 1.00th=[ 7242], 5.00th=[10552], 10.00th=[11207], 20.00th=[11469], 00:11:21.762 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12780], 60.00th=[13566], 00:11:21.762 | 70.00th=[15139], 80.00th=[21627], 90.00th=[29492], 95.00th=[33817], 00:11:21.762 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40633], 99.95th=[44827], 00:11:21.762 | 99.99th=[45351] 00:11:21.762 write: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec); 0 zone resets 00:11:21.762 slat (usec): min=2, max=65176, avg=205.61, stdev=1750.62 00:11:21.762 clat (msec): min=4, max=108, avg=22.22, stdev=14.86 00:11:21.762 
lat (msec): min=4, max=144, avg=22.42, stdev=15.12 00:11:21.762 clat percentiles (msec): 00:11:21.762 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:11:21.762 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 17], 60.00th=[ 19], 00:11:21.762 | 70.00th=[ 22], 80.00th=[ 42], 90.00th=[ 51], 95.00th=[ 53], 00:11:21.762 | 99.00th=[ 57], 99.50th=[ 57], 99.90th=[ 64], 99.95th=[ 69], 00:11:21.762 | 99.99th=[ 109] 00:11:21.762 bw ( KiB/s): min= 9016, max=15560, per=21.63%, avg=12288.00, stdev=4627.31, samples=2 00:11:21.762 iops : min= 2254, max= 3890, avg=3072.00, stdev=1156.83, samples=2 00:11:21.762 lat (msec) : 4=0.12%, 10=3.20%, 20=69.94%, 50=21.57%, 100=5.15% 00:11:21.762 lat (msec) : 250=0.02% 00:11:21.762 cpu : usr=2.36%, sys=4.43%, ctx=238, majf=0, minf=1 00:11:21.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:21.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.762 issued rwts: total=2889,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.763 job2: (groupid=0, jobs=1): err= 0: pid=3540747: Tue Dec 10 12:14:28 2024 00:11:21.763 read: IOPS=3765, BW=14.7MiB/s (15.4MB/s)(15.0MiB/1017msec) 00:11:21.763 slat (nsec): min=1554, max=12093k, avg=115931.83, stdev=771551.47 00:11:21.763 clat (usec): min=4186, max=45646, avg=13446.37, stdev=5941.00 00:11:21.763 lat (usec): min=4193, max=45662, avg=13562.30, stdev=6008.57 00:11:21.763 clat percentiles (usec): 00:11:21.763 | 1.00th=[ 6587], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9765], 00:11:21.763 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11731], 60.00th=[12125], 00:11:21.763 | 70.00th=[12518], 80.00th=[15926], 90.00th=[19006], 95.00th=[25822], 00:11:21.763 | 99.00th=[40109], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:11:21.763 | 99.99th=[45876] 00:11:21.763 write: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec); 0 zone resets 00:11:21.763 slat (usec): min=2, max=25620, avg=131.62, stdev=756.79 00:11:21.763 clat (usec): min=1430, max=46916, avg=18068.79, stdev=11221.23 00:11:21.763 lat (usec): min=1447, max=46924, avg=18200.40, stdev=11298.66 00:11:21.763 clat percentiles (usec): 00:11:21.763 | 1.00th=[ 4146], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[ 9241], 00:11:21.763 | 30.00th=[10159], 40.00th=[10683], 50.00th=[12518], 60.00th=[17695], 00:11:21.763 | 70.00th=[19006], 80.00th=[29754], 90.00th=[38011], 95.00th=[40633], 00:11:21.763 | 99.00th=[44827], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:11:21.763 | 99.99th=[46924] 00:11:21.763 bw ( KiB/s): min=12304, max=20464, per=28.84%, avg=16384.00, stdev=5769.99, samples=2 00:11:21.763 iops : min= 3076, max= 5116, avg=4096.00, stdev=1442.50, samples=2 00:11:21.763 lat (msec) : 2=0.03%, 4=0.48%, 10=24.22%, 20=56.27%, 50=19.00% 00:11:21.763 cpu : usr=3.84%, sys=4.92%, ctx=408, majf=0, minf=1 00:11:21.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:21.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.763 issued rwts: total=3830,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.763 job3: (groupid=0, jobs=1): err= 0: pid=3540750: Tue Dec 10 12:14:28 2024 00:11:21.763 read: IOPS=2522, BW=9.85MiB/s 
(10.3MB/s)(10.0MiB/1015msec) 00:11:21.763 slat (nsec): min=1575, max=22022k, avg=162293.76, stdev=1178861.24 00:11:21.763 clat (msec): min=8, max=110, avg=16.87, stdev=12.16 00:11:21.763 lat (msec): min=8, max=110, avg=17.04, stdev=12.33 00:11:21.763 clat percentiles (msec): 00:11:21.763 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:11:21.763 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:11:21.763 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 25], 95.00th=[ 36], 00:11:21.763 | 99.00th=[ 86], 99.50th=[ 104], 99.90th=[ 111], 99.95th=[ 111], 00:11:21.763 | 99.99th=[ 111] 00:11:21.763 write: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1015msec); 0 zone resets 00:11:21.763 slat (usec): min=2, max=19610, avg=190.91, stdev=968.47 00:11:21.763 clat (msec): min=8, max=125, avg=28.63, stdev=18.06 00:11:21.763 lat (msec): min=10, max=125, avg=28.82, stdev=18.13 00:11:21.763 clat percentiles (msec): 00:11:21.763 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 18], 00:11:21.763 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 26], 00:11:21.763 | 70.00th=[ 32], 80.00th=[ 40], 90.00th=[ 48], 95.00th=[ 49], 00:11:21.763 | 99.00th=[ 114], 99.50th=[ 125], 99.90th=[ 126], 99.95th=[ 127], 00:11:21.763 | 99.99th=[ 127] 00:11:21.763 bw ( KiB/s): min= 9528, max=13056, per=19.87%, avg=11292.00, stdev=2494.67, samples=2 00:11:21.763 iops : min= 2382, max= 3264, avg=2823.00, stdev=623.67, samples=2 00:11:21.763 lat (msec) : 10=0.69%, 20=64.63%, 50=31.45%, 100=1.71%, 250=1.52% 00:11:21.763 cpu : usr=2.07%, sys=4.04%, ctx=348, majf=0, minf=1 00:11:21.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:21.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.763 issued rwts: total=2560,2951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.763 00:11:21.763 Run status group 0 (all jobs): 00:11:21.763 READ: bw=51.4MiB/s (53.9MB/s), 9.85MiB/s-15.8MiB/s (10.3MB/s-16.6MB/s), io=52.2MiB (54.8MB), run=1013-1017msec 00:11:21.763 WRITE: bw=55.5MiB/s (58.2MB/s), 11.4MiB/s-16.7MiB/s (11.9MB/s-17.5MB/s), io=56.4MiB (59.2MB), run=1013-1017msec 00:11:21.763 00:11:21.763 Disk stats (read/write): 00:11:21.763 nvme0n1: ios=3634/3679, merge=0/0, ticks=44300/60239, in_queue=104539, util=86.77% 00:11:21.763 nvme0n2: ios=2070/2416, merge=0/0, ticks=21818/26453, in_queue=48271, util=95.43% 00:11:21.763 nvme0n3: ios=3615/3671, merge=0/0, ticks=44950/55728, in_queue=100678, util=99.38% 00:11:21.763 nvme0n4: ios=2067/2399, merge=0/0, ticks=20738/32642, in_queue=53380, util=95.91% 00:11:21.763 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:21.763 [global] 00:11:21.763 thread=1 00:11:21.763 invalidate=1 00:11:21.763 rw=randwrite 00:11:21.763 time_based=1 00:11:21.763 runtime=1 00:11:21.763 ioengine=libaio 00:11:21.763 direct=1 00:11:21.763 bs=4096 00:11:21.763 iodepth=128 00:11:21.763 norandommap=0 00:11:21.763 numjobs=1 00:11:21.763 00:11:21.763 verify_dump=1 00:11:21.763 verify_backlog=512 00:11:21.763 verify_state_save=0 00:11:21.763 do_verify=1 00:11:21.763 verify=crc32c-intel 00:11:21.763 [job0] 00:11:21.763 filename=/dev/nvme0n1 00:11:21.763 [job1] 00:11:21.763 filename=/dev/nvme0n2 00:11:21.763 [job2] 00:11:21.763 filename=/dev/nvme0n3 
00:11:21.763 [job3] 00:11:21.763 filename=/dev/nvme0n4 00:11:21.763 Could not set queue depth (nvme0n1) 00:11:21.763 Could not set queue depth (nvme0n2) 00:11:21.763 Could not set queue depth (nvme0n3) 00:11:21.763 Could not set queue depth (nvme0n4) 00:11:22.020 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.020 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.020 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.020 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.020 fio-3.35 00:11:22.020 Starting 4 threads 00:11:23.393 00:11:23.393 job0: (groupid=0, jobs=1): err= 0: pid=3541109: Tue Dec 10 12:14:29 2024 00:11:23.393 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:11:23.393 slat (nsec): min=1465, max=23877k, avg=149265.86, stdev=1033007.03 00:11:23.393 clat (msec): min=6, max=108, avg=20.13, stdev=17.26 00:11:23.393 lat (msec): min=6, max=109, avg=20.28, stdev=17.36 00:11:23.393 clat percentiles (msec): 00:11:23.393 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:11:23.393 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 16], 00:11:23.393 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 46], 95.00th=[ 56], 00:11:23.393 | 99.00th=[ 92], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 109], 00:11:23.393 | 99.99th=[ 109] 00:11:23.393 write: IOPS=3571, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:11:23.393 slat (usec): min=2, max=15998, avg=124.31, stdev=651.31 00:11:23.393 clat (usec): min=1758, max=92044, avg=15119.32, stdev=10379.52 00:11:23.393 lat (usec): min=5840, max=92056, avg=15243.63, stdev=10472.70 00:11:23.393 clat percentiles (usec): 00:11:23.393 | 1.00th=[ 6915], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10552], 00:11:23.393 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[12387], 00:11:23.393 | 70.00th=[14746], 80.00th=[16188], 90.00th=[23987], 95.00th=[25297], 00:11:23.393 | 99.00th=[85459], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:11:23.393 | 99.99th=[91751] 00:11:23.393 bw ( KiB/s): min= 8192, max=20480, per=20.82%, avg=14336.00, stdev=8688.93, samples=2 00:11:23.393 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:11:23.393 lat (msec) : 2=0.01%, 10=13.22%, 20=64.95%, 50=17.22%, 100=4.35% 00:11:23.393 lat (msec) : 250=0.25% 00:11:23.393 cpu : usr=2.89%, sys=4.28%, ctx=455, majf=0, minf=1 00:11:23.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:23.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.393 issued rwts: total=3584,3589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.393 job1: (groupid=0, jobs=1): err= 0: pid=3541112: Tue Dec 10 12:14:29 2024 00:11:23.393 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:11:23.393 slat (nsec): min=1376, max=22425k, avg=93474.82, stdev=602575.38 00:11:23.393 clat (usec): min=7261, max=59644, avg=11822.62, stdev=5684.53 00:11:23.393 lat (usec): min=7268, max=59660, avg=11916.10, stdev=5730.76 00:11:23.393 clat percentiles (usec): 00:11:23.393 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10028], 00:11:23.393 | 30.00th=[10290], 
40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:11:23.393 | 70.00th=[11338], 80.00th=[11731], 90.00th=[13173], 95.00th=[16581], 00:11:23.393 | 99.00th=[47449], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:11:23.393 | 99.99th=[59507] 00:11:23.393 write: IOPS=5164, BW=20.2MiB/s (21.2MB/s)(20.2MiB/1003msec); 0 zone resets 00:11:23.393 slat (usec): min=2, max=25110, avg=95.12, stdev=681.17 00:11:23.393 clat (usec): min=2309, max=58334, avg=12791.62, stdev=7390.04 00:11:23.393 lat (usec): min=2320, max=58349, avg=12886.74, stdev=7439.29 00:11:23.393 clat percentiles (usec): 00:11:23.393 | 1.00th=[ 5014], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10290], 00:11:23.393 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10683], 00:11:23.393 | 70.00th=[10814], 80.00th=[11076], 90.00th=[16909], 95.00th=[33817], 00:11:23.393 | 99.00th=[43254], 99.50th=[44303], 99.90th=[44303], 99.95th=[53740], 00:11:23.393 | 99.99th=[58459] 00:11:23.393 bw ( KiB/s): min=17296, max=23664, per=29.74%, avg=20480.00, stdev=4502.86, samples=2 00:11:23.393 iops : min= 4324, max= 5916, avg=5120.00, stdev=1125.71, samples=2 00:11:23.393 lat (msec) : 4=0.31%, 10=13.75%, 20=80.16%, 50=5.35%, 100=0.44% 00:11:23.393 cpu : usr=4.79%, sys=4.79%, ctx=495, majf=0, minf=2 00:11:23.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:23.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.393 issued rwts: total=5120,5180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.393 job2: (groupid=0, jobs=1): err= 0: pid=3541115: Tue Dec 10 12:14:29 2024 00:11:23.393 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:11:23.393 slat (nsec): min=1663, max=14458k, avg=122565.76, stdev=880026.69 00:11:23.393 clat (usec): min=6878, max=52842, avg=16376.46, stdev=6023.99 00:11:23.393 lat (usec): min=6885, max=52866, avg=16499.03, stdev=6075.99 00:11:23.393 clat percentiles (usec): 00:11:23.393 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[11600], 20.00th=[12256], 00:11:23.393 | 30.00th=[12780], 40.00th=[13829], 50.00th=[14222], 60.00th=[14877], 00:11:23.393 | 70.00th=[16319], 80.00th=[20841], 90.00th=[24511], 95.00th=[26608], 00:11:23.393 | 99.00th=[41157], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:11:23.393 | 99.99th=[52691] 00:11:23.393 write: IOPS=3784, BW=14.8MiB/s (15.5MB/s)(15.0MiB/1012msec); 0 zone resets 00:11:23.393 slat (usec): min=2, max=13349, avg=130.41, stdev=770.64 00:11:23.393 clat (usec): min=1431, max=49419, avg=18009.38, stdev=8667.09 00:11:23.393 lat (usec): min=1439, max=49426, avg=18139.79, stdev=8713.89 00:11:23.393 clat percentiles (usec): 00:11:23.393 | 1.00th=[ 3589], 5.00th=[ 6194], 10.00th=[ 8586], 20.00th=[11994], 00:11:23.393 | 30.00th=[12780], 40.00th=[14222], 50.00th=[15664], 60.00th=[17695], 00:11:23.393 | 70.00th=[21103], 80.00th=[24773], 90.00th=[30016], 95.00th=[35914], 00:11:23.394 | 99.00th=[42206], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:11:23.394 | 99.99th=[49546] 00:11:23.394 bw ( KiB/s): min=13232, max=16384, per=21.51%, avg=14808.00, stdev=2228.80, samples=2 00:11:23.394 iops : min= 3308, max= 4096, avg=3702.00, stdev=557.20, samples=2 00:11:23.394 lat (msec) : 2=0.11%, 4=0.78%, 10=8.39%, 20=62.92%, 50=27.79% 00:11:23.394 lat (msec) : 100=0.01% 00:11:23.394 cpu : usr=3.17%, sys=4.75%, ctx=297, majf=0, minf=1 00:11:23.394 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:23.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.394 issued rwts: total=3584,3830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.394 job3: (groupid=0, jobs=1): err= 0: pid=3541116: Tue Dec 10 12:14:29 2024 00:11:23.394 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:11:23.394 slat (nsec): min=1063, max=14018k, avg=113113.82, stdev=771590.09 00:11:23.394 clat (usec): min=3491, max=38433, avg=14545.61, stdev=5773.03 00:11:23.394 lat (usec): min=3495, max=38440, avg=14658.73, stdev=5828.12 00:11:23.394 clat percentiles (usec): 00:11:23.394 | 1.00th=[ 4555], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[11207], 00:11:23.394 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[13173], 00:11:23.394 | 70.00th=[14615], 80.00th=[17957], 90.00th=[23725], 95.00th=[28181], 00:11:23.394 | 99.00th=[33424], 99.50th=[33817], 99.90th=[38536], 99.95th=[38536], 00:11:23.394 | 99.99th=[38536] 00:11:23.394 write: IOPS=4769, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1011msec); 0 zone resets 00:11:23.394 slat (nsec): min=1927, max=9948.8k, avg=81971.72, stdev=474084.33 00:11:23.394 clat (usec): min=759, max=38002, avg=12709.07, stdev=6254.56 00:11:23.394 lat (usec): min=769, max=38008, avg=12791.04, stdev=6291.86 00:11:23.394 clat percentiles (usec): 00:11:23.394 | 1.00th=[ 3490], 5.00th=[ 5211], 10.00th=[ 7504], 20.00th=[ 8848], 00:11:23.394 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[11863], 60.00th=[12125], 00:11:23.394 | 70.00th=[12387], 80.00th=[12911], 90.00th=[23987], 95.00th=[26870], 00:11:23.394 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:11:23.394 | 99.99th=[38011] 00:11:23.394 bw ( KiB/s): min=16384, max=21176, per=27.27%, avg=18780.00, stdev=3388.46, samples=2 00:11:23.394 iops : min= 4096, max= 5294, avg=4695.00, stdev=847.11, samples=2 00:11:23.394 lat (usec) : 1000=0.07% 00:11:23.394 lat (msec) : 4=1.54%, 10=22.49%, 20=61.52%, 50=14.38% 00:11:23.394 cpu : usr=3.86%, sys=5.64%, ctx=445, majf=0, minf=2 00:11:23.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:23.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.394 issued rwts: total=4608,4822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.394 00:11:23.394 Run status group 0 (all jobs): 00:11:23.394 READ: bw=65.2MiB/s (68.4MB/s), 13.8MiB/s-19.9MiB/s (14.5MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1003-1012msec 00:11:23.394 WRITE: bw=67.2MiB/s (70.5MB/s), 13.9MiB/s-20.2MiB/s (14.6MB/s-21.2MB/s), io=68.1MiB (71.4MB), run=1003-1012msec 00:11:23.394 00:11:23.394 Disk stats (read/write): 00:11:23.394 nvme0n1: ios=3098/3302, merge=0/0, ticks=23489/21206, in_queue=44695, util=99.70% 00:11:23.394 nvme0n2: ios=4145/4443, merge=0/0, ticks=17692/18233, in_queue=35925, util=98.48% 00:11:23.394 nvme0n3: ios=3104/3311, merge=0/0, ticks=41630/42959, in_queue=84589, util=98.13% 00:11:23.394 nvme0n4: ios=3642/4096, merge=0/0, ticks=34014/33872, in_queue=67886, util=98.11% 00:11:23.394 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:23.394 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # 
fio_pid=3541337 00:11:23.394 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:23.394 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:23.394 [global] 00:11:23.394 thread=1 00:11:23.394 invalidate=1 00:11:23.394 rw=read 00:11:23.394 time_based=1 00:11:23.394 runtime=10 00:11:23.394 ioengine=libaio 00:11:23.394 direct=1 00:11:23.394 bs=4096 00:11:23.394 iodepth=1 00:11:23.394 norandommap=1 00:11:23.394 numjobs=1 00:11:23.394 00:11:23.394 [job0] 00:11:23.394 filename=/dev/nvme0n1 00:11:23.394 [job1] 00:11:23.394 filename=/dev/nvme0n2 00:11:23.394 [job2] 00:11:23.394 filename=/dev/nvme0n3 00:11:23.394 [job3] 00:11:23.394 filename=/dev/nvme0n4 00:11:23.394 Could not set queue depth (nvme0n1) 00:11:23.394 Could not set queue depth (nvme0n2) 00:11:23.394 Could not set queue depth (nvme0n3) 00:11:23.394 Could not set queue depth (nvme0n4) 00:11:23.651 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.651 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.651 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.651 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.651 fio-3.35 00:11:23.651 Starting 4 threads 00:11:26.176 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:26.433 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43032576, buflen=4096 00:11:26.433 fio: pid=3541500, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.433 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:26.691 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.691 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:26.691 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=417792, buflen=4096 00:11:26.691 fio: pid=3541497, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.948 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=319488, buflen=4096 00:11:26.948 fio: pid=3541486, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.948 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.948 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:27.207 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=49147904, buflen=4096 00:11:27.207 fio: pid=3541490, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:27.207 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.207 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:27.207 00:11:27.207 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3541486: Tue Dec 10 12:14:34 2024 00:11:27.207 read: IOPS=24, BW=96.3KiB/s (98.6kB/s)(312KiB/3239msec) 00:11:27.207 slat (usec): min=10, max=5800, avg=95.98, stdev=650.00 00:11:27.207 clat (usec): min=392, max=87485, avg=41111.39, stdev=7037.74 00:11:27.207 lat (usec): min=422, max=87509, avg=41208.25, stdev=7078.95 00:11:27.207 clat percentiles (usec): 00:11:27.207 | 1.00th=[ 392], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:27.207 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:27.207 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:27.207 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:11:27.207 | 99.99th=[87557] 00:11:27.207 bw ( KiB/s): min= 96, max= 104, per=0.38%, avg=98.00, stdev= 3.35, samples=6 00:11:27.207 iops : min= 24, max= 26, avg=24.50, stdev= 0.84, samples=6 00:11:27.207 lat (usec) : 500=1.27% 00:11:27.207 lat (msec) : 50=96.20%, 100=1.27% 00:11:27.207 cpu : usr=0.12%, sys=0.00%, ctx=82, majf=0, minf=2 00:11:27.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.207 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.207 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.207 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3541490: Tue Dec 10 12:14:34 2024 00:11:27.207 read: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(46.9MiB/3492msec) 00:11:27.207 slat (usec): min=4, max=15706, avg=10.52, stdev=207.50 00:11:27.207 clat (usec): min=190, max=106852, avg=278.68, stdev=1504.68 00:11:27.207 lat (usec): min=197, max=106861, avg=288.65, stdev=1518.11 00:11:27.207 clat percentiles (usec): 00:11:27.207 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 231], 00:11:27.207 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:11:27.207 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:11:27.207 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[ 1237], 99.95th=[ 1909], 00:11:27.207 | 99.99th=[104334] 00:11:27.207 bw ( KiB/s): min=14940, max=15584, per=58.56%, avg=15216.67, stdev=236.27, samples=6 00:11:27.207 iops : min= 3735, max= 3896, avg=3804.17, stdev=59.07, samples=6 00:11:27.207 lat (usec) : 250=58.27%, 500=41.34%, 750=0.21%, 1000=0.01% 00:11:27.207 lat (msec) : 2=0.12%, 50=0.03%, 250=0.02% 00:11:27.207 cpu : usr=2.09%, sys=5.10%, ctx=12003, majf=0, minf=2 00:11:27.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.207 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.207 issued rwts: total=12000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.207 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3541497: Tue Dec 10 12:14:34 2024 00:11:27.207 read: IOPS=34, 
BW=136KiB/s (140kB/s)(408KiB/2990msec) 00:11:27.207 slat (nsec): min=6904, max=59449, avg=19616.68, stdev=8437.21 00:11:27.207 clat (usec): min=220, max=42047, avg=29055.31, stdev=18687.82 00:11:27.207 lat (usec): min=228, max=42071, avg=29074.88, stdev=18694.73 00:11:27.207 clat percentiles (usec): 00:11:27.207 | 1.00th=[ 223], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 247], 00:11:27.207 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:27.207 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:27.208 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:27.208 | 99.99th=[42206] 00:11:27.208 bw ( KiB/s): min= 96, max= 328, per=0.55%, avg=144.00, stdev=102.92, samples=5 00:11:27.208 iops : min= 24, max= 82, avg=36.00, stdev=25.73, samples=5 00:11:27.208 lat (usec) : 250=23.30%, 500=5.83% 00:11:27.208 lat (msec) : 50=69.90% 00:11:27.208 cpu : usr=0.13%, sys=0.00%, ctx=107, majf=0, minf=1 00:11:27.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.208 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.208 issued rwts: total=103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.208 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3541500: Tue Dec 10 12:14:34 2024 00:11:27.208 read: IOPS=3833, BW=15.0MiB/s (15.7MB/s)(41.0MiB/2741msec) 00:11:27.208 slat (nsec): min=6558, max=32303, avg=7459.72, stdev=993.51 00:11:27.208 clat (usec): min=200, max=1090, avg=249.56, stdev=19.19 00:11:27.208 lat (usec): min=209, max=1097, avg=257.02, stdev=19.20 00:11:27.208 clat percentiles (usec): 00:11:27.208 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:11:27.208 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:11:27.208 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:11:27.208 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 441], 99.95th=[ 494], 00:11:27.208 | 99.99th=[ 881] 00:11:27.208 bw ( KiB/s): min=15408, max=15520, per=59.63%, avg=15496.00, stdev=49.32, samples=5 00:11:27.208 iops : min= 3852, max= 3880, avg=3874.00, stdev=12.33, samples=5 00:11:27.208 lat (usec) : 250=52.97%, 500=46.98%, 750=0.02%, 1000=0.01% 00:11:27.208 lat (msec) : 2=0.01% 00:11:27.208 cpu : usr=1.17%, sys=3.39%, ctx=10508, majf=0, minf=2 00:11:27.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.208 issued rwts: total=10507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.208 00:11:27.208 Run status group 0 (all jobs): 00:11:27.208 READ: bw=25.4MiB/s (26.6MB/s), 96.3KiB/s-15.0MiB/s (98.6kB/s-15.7MB/s), io=88.6MiB (92.9MB), run=2741-3492msec 00:11:27.208 00:11:27.208 Disk stats (read/write): 00:11:27.208 nvme0n1: ios=114/0, merge=0/0, ticks=4157/0, in_queue=4157, util=99.57% 00:11:27.208 nvme0n2: ios=11997/0, merge=0/0, ticks=3071/0, in_queue=3071, util=95.52% 00:11:27.208 nvme0n3: ios=149/0, merge=0/0, ticks=3972/0, in_queue=3972, util=99.29% 00:11:27.208 nvme0n4: ios=10115/0, merge=0/0, ticks=3333/0, in_queue=3333, util=99.70% 00:11:27.466 12:14:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.466 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:27.724 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.724 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:27.981 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.981 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:28.238 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.238 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:28.495 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:28.495 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3541337 00:11:28.495 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:28.495 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:29.867 nvmf hotplug test: fio failed as expected 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:29.867 12:14:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.867 rmmod nvme_tcp 00:11:29.867 rmmod nvme_fabrics 00:11:29.867 rmmod nvme_keyring 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3538467 ']' 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3538467 00:11:29.867 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3538467 ']' 00:11:29.868 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3538467 00:11:29.868 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:29.868 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.868 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3538467 00:11:30.125 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.125 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.125 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3538467' 00:11:30.125 killing process with pid 3538467 00:11:30.125 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3538467 00:11:30.125 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3538467 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.498 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.399 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.399 00:11:33.399 real 0m29.576s 00:11:33.399 user 2m0.093s 00:11:33.399 sys 0m7.921s 00:11:33.399 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.399 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.399 ************************************ 00:11:33.399 END TEST nvmf_fio_target 00:11:33.399 ************************************ 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.399 ************************************ 00:11:33.399 START TEST nvmf_bdevio 00:11:33.399 ************************************ 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.399 * Looking for test storage... 
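The banner blocks and the real/user/sys summary above come from the harness's run_test wrapper in autotest_common.sh. A minimal sketch of that wrapper, reconstructed only from the banners and timing output visible in this log rather than from the actual SPDK source, assuming bash's built-in time keyword produces the summary:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # bash time keyword; emits the real/user/sys lines seen above
        local rc=$?      # status of the timed command
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

Invoked as run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp, it brackets each suite with START/END banners around a timed, xtraced run of the test script, matching the sequence recorded between nvmf_fio_target ending and nvmf_bdevio starting.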
00:11:33.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.399 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.658 --rc genhtml_branch_coverage=1 00:11:33.658 --rc genhtml_function_coverage=1 00:11:33.658 --rc genhtml_legend=1 00:11:33.658 --rc geninfo_all_blocks=1 00:11:33.658 --rc geninfo_unexecuted_blocks=1 00:11:33.658 00:11:33.658 ' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.658 --rc genhtml_branch_coverage=1 00:11:33.658 --rc genhtml_function_coverage=1 00:11:33.658 --rc genhtml_legend=1 00:11:33.658 --rc geninfo_all_blocks=1 00:11:33.658 --rc geninfo_unexecuted_blocks=1 00:11:33.658 00:11:33.658 ' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.658 --rc genhtml_branch_coverage=1 00:11:33.658 --rc genhtml_function_coverage=1 00:11:33.658 --rc genhtml_legend=1 00:11:33.658 --rc geninfo_all_blocks=1 00:11:33.658 --rc geninfo_unexecuted_blocks=1 00:11:33.658 00:11:33.658 ' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.658 --rc genhtml_branch_coverage=1 00:11:33.658 --rc genhtml_function_coverage=1 00:11:33.658 --rc genhtml_legend=1 00:11:33.658 --rc geninfo_all_blocks=1 00:11:33.658 --rc geninfo_unexecuted_blocks=1 00:11:33.658 00:11:33.658 ' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.658 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:39.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:39.000 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.000 12:14:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.000 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:39.001 Found net devices under 0000:af:00.0: cvl_0_0 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:39.001 Found net devices under 0000:af:00.1: cvl_0_1 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.001 
12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:11:39.001 00:11:39.001 --- 10.0.0.2 ping statistics --- 00:11:39.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.001 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:39.001 00:11:39.001 --- 10.0.0.1 ping statistics --- 00:11:39.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.001 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3546101 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3546101 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3546101 ']' 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.001 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.001 [2024-12-10 12:14:45.738864] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:11:39.001 [2024-12-10 12:14:45.738951] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.258 [2024-12-10 12:14:45.857411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.258 [2024-12-10 12:14:45.968742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.258 [2024-12-10 12:14:45.968789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.258 [2024-12-10 12:14:45.968799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.258 [2024-12-10 12:14:45.968810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.258 [2024-12-10 12:14:45.968818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.258 [2024-12-10 12:14:45.971256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:39.258 [2024-12-10 12:14:45.971353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:39.258 [2024-12-10 12:14:45.971366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.258 [2024-12-10 12:14:45.971392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.822 [2024-12-10 12:14:46.588874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.822 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 Malloc0 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.079 12:14:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.079 [2024-12-10 12:14:46.705266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:40.079 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:40.080 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:40.080 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:40.080 { 00:11:40.080 "params": { 00:11:40.080 "name": "Nvme$subsystem", 00:11:40.080 "trtype": "$TEST_TRANSPORT", 00:11:40.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.080 "adrfam": "ipv4", 00:11:40.080 "trsvcid": "$NVMF_PORT", 00:11:40.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.080 "hdgst": ${hdgst:-false}, 00:11:40.080 "ddgst": ${ddgst:-false} 00:11:40.080 }, 00:11:40.080 "method": "bdev_nvme_attach_controller" 00:11:40.080 } 00:11:40.080 EOF 00:11:40.080 )") 00:11:40.080 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:40.080 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:40.080 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:40.080 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:40.080 "params": { 00:11:40.080 "name": "Nvme1", 00:11:40.080 "trtype": "tcp", 00:11:40.080 "traddr": "10.0.0.2", 00:11:40.080 "adrfam": "ipv4", 00:11:40.080 "trsvcid": "4420", 00:11:40.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.080 "hdgst": false, 00:11:40.080 "ddgst": false 00:11:40.080 }, 00:11:40.080 "method": "bdev_nvme_attach_controller" 00:11:40.080 }' 00:11:40.080 [2024-12-10 12:14:46.783863] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:11:40.080 [2024-12-10 12:14:46.783946] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546350 ] 00:11:40.080 [2024-12-10 12:14:46.895907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.337 [2024-12-10 12:14:47.015312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.337 [2024-12-10 12:14:47.015388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.337 [2024-12-10 12:14:47.015394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.901 I/O targets: 00:11:40.901 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:40.901 00:11:40.901 00:11:40.901 CUnit - A unit testing framework for C - Version 2.1-3 00:11:40.901 http://cunit.sourceforge.net/ 00:11:40.901 00:11:40.901 00:11:40.901 Suite: bdevio tests on: Nvme1n1 00:11:40.901 Test: blockdev write read block ...passed 00:11:40.901 Test: blockdev write zeroes read block ...passed 00:11:40.901 Test: blockdev write zeroes read no split ...passed 00:11:40.901 Test: blockdev write zeroes read split ...passed 00:11:40.901 Test: blockdev write zeroes read split partial ...passed 00:11:40.901 Test: blockdev reset ...[2024-12-10 12:14:47.646895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:40.901 [2024-12-10 12:14:47.647013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:11:40.901 [2024-12-10 12:14:47.664886] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:40.901 passed 00:11:40.901 Test: blockdev write read 8 blocks ...passed 00:11:40.901 Test: blockdev write read size > 128k ...passed 00:11:40.901 Test: blockdev write read invalid size ...passed 00:11:41.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.159 Test: blockdev write read max offset ...passed 00:11:41.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.159 Test: blockdev writev readv 8 blocks ...passed 00:11:41.159 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.159 Test: blockdev writev readv block ...passed 00:11:41.159 Test: blockdev writev readv size > 128k ...passed 00:11:41.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.159 Test: blockdev comparev and writev ...[2024-12-10 12:14:47.881190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.881232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.881253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.881265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.881565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.881581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.881597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.881608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.881880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.881895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.881910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.881920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.882187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.882206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.882226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:41.159 [2024-12-10 12:14:47.882236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:41.159 passed 00:11:41.159 Test: blockdev nvme passthru rw ...passed 00:11:41.159 Test: blockdev nvme passthru vendor specific ...[2024-12-10 12:14:47.966577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.159 [2024-12-10 12:14:47.966611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.966749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.159 [2024-12-10 12:14:47.966762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.966893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.159 [2024-12-10 12:14:47.966906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:41.159 [2024-12-10 12:14:47.967026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:41.159 [2024-12-10 12:14:47.967039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:41.159 passed 00:11:41.159 Test: blockdev nvme admin passthru ...passed 00:11:41.416 Test: blockdev copy ...passed 00:11:41.416 00:11:41.416 Run Summary: Type Total Ran Passed Failed Inactive 00:11:41.416 suites 1 1 n/a 0 0 00:11:41.416 tests 23 23 23 0 0 00:11:41.417 asserts 152 152 152 0 n/a 00:11:41.417 00:11:41.417 Elapsed time = 1.250 seconds 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.348 rmmod nvme_tcp 00:11:42.348 rmmod nvme_fabrics 00:11:42.348 rmmod nvme_keyring 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
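The teardown traced just above (nvmfcleanup, nvmf/common.sh@121-129) is worth spelling out: it syncs, drops `set -e`, and retries the kernel-module unload up to 20 times, since nvme-tcp can still be referenced for a moment after the target exits. A minimal sketch of that pattern follows; the retry condition and back-off are assumptions for illustration, as the exact loop body lives in test/nvmf/common.sh and is not fully shown in the trace:

    # Unload NVMe-oF kernel modules, tolerating transient "in use" failures.
    nvmfcleanup_sketch() {
        sync
        set +e                                   # modprobe -r may fail while refs drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp &&
                modprobe -v -r nvme-fabrics && break
            sleep 1                              # assumed back-off; not shown in the log
        done
        set -e
    }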
00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3546101 ']' 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3546101 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3546101 ']' 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3546101 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.348 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546101 00:11:42.348 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:42.348 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:42.348 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546101' 00:11:42.348 killing process with pid 3546101 00:11:42.348 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3546101 00:11:42.348 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3546101 00:11:43.720 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.721 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.253 00:11:46.253 real 0m12.405s 00:11:46.253 user 0m22.658s 00:11:46.253 sys 0m4.778s 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:46.253 ************************************ 00:11:46.253 END TEST nvmf_bdevio 00:11:46.253 ************************************ 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:46.253 00:11:46.253 real 5m0.617s 00:11:46.253 user 12m0.521s 00:11:46.253 sys 1m34.982s 
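One detail of the run above deserves a callout: every iptables rule the test inserts is tagged, and teardown removes only tagged rules. At setup (nvmf/common.sh@287/@790) the ipts wrapper appends -m comment --comment 'SPDK_NVMF:<rule>' to the real iptables call; at fini (@297/@791) the iptr step rewrites the ruleset as iptables-save | grep -v SPDK_NVMF | iptables-restore, so firewall rules unrelated to the test survive cleanup. Both expansions appear verbatim in the log; the wrapper bodies below are a sketch reconstructed from them, assuming iptables with the comment match module available:

    # Insert a rule and tag it so teardown can identify it later.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Remove every tagged rule in one pass by filtering the saved ruleset.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }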
00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.253 ************************************ 00:11:46.253 END TEST nvmf_target_core 00:11:46.253 ************************************ 00:11:46.253 12:14:52 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:46.253 12:14:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.253 12:14:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.253 12:14:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:46.253 ************************************ 00:11:46.253 START TEST nvmf_target_extra 00:11:46.253 ************************************ 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:46.253 * Looking for test storage... 00:11:46.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.253 --rc genhtml_branch_coverage=1 00:11:46.253 --rc genhtml_function_coverage=1 00:11:46.253 --rc genhtml_legend=1 00:11:46.253 --rc geninfo_all_blocks=1 00:11:46.253 --rc geninfo_unexecuted_blocks=1 00:11:46.253 00:11:46.253 ' 00:11:46.253 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.253 --rc genhtml_branch_coverage=1 00:11:46.253 --rc genhtml_function_coverage=1 00:11:46.253 --rc genhtml_legend=1 00:11:46.253 --rc geninfo_all_blocks=1 00:11:46.253 --rc geninfo_unexecuted_blocks=1 00:11:46.253 00:11:46.253 ' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.254 --rc genhtml_branch_coverage=1 00:11:46.254 --rc genhtml_function_coverage=1 00:11:46.254 --rc genhtml_legend=1 00:11:46.254 --rc geninfo_all_blocks=1 00:11:46.254 --rc geninfo_unexecuted_blocks=1 00:11:46.254 00:11:46.254 ' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.254 --rc genhtml_branch_coverage=1 00:11:46.254 --rc genhtml_function_coverage=1 00:11:46.254 --rc genhtml_legend=1 00:11:46.254 --rc geninfo_all_blocks=1 00:11:46.254 --rc geninfo_unexecuted_blocks=1 00:11:46.254 00:11:46.254 ' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
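The long trace above (scripts/common.sh@333-368) is the harness deciding whether the installed lcov predates 2.x: lt 1.15 2 calls cmp_versions 1.15 '<' 2, which splits each version string on '.', '-', and ':' and compares numeric components left to right. A standalone restatement of that logic, where the function name version_lt and the zero-padding of missing components are illustrative choices rather than the harness's exact code:

    # Returns success when $1 is a strictly older version than $2.
    # Mirrors the cmp_versions walk traced above: split on .-: and
    # compare numeric components left to right.
    version_lt() {
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: keep legacy --rc lcov_* options"

Separately, the "[: : integer expression expected" warning that fires each time nvmf/common.sh is sourced (line 33, seen in the bdevio prologue earlier and again just below) comes from an empty string reaching a numeric test, '[' '' -eq 1 ']'. Defaulting the operand, e.g. [ "${flag:-0}" -eq 1 ] where flag stands for whichever variable is unset (the log does not show which), would silence it without changing behavior.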
00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.254 ************************************ 00:11:46.254 START TEST nvmf_example 00:11:46.254 ************************************ 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:46.254 * Looking for test storage... 
00:11:46.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:46.254 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.255 --rc genhtml_branch_coverage=1 00:11:46.255 --rc genhtml_function_coverage=1 00:11:46.255 --rc genhtml_legend=1 00:11:46.255 --rc geninfo_all_blocks=1 00:11:46.255 --rc geninfo_unexecuted_blocks=1 00:11:46.255 00:11:46.255 ' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.255 --rc genhtml_branch_coverage=1 00:11:46.255 --rc genhtml_function_coverage=1 00:11:46.255 --rc genhtml_legend=1 00:11:46.255 --rc geninfo_all_blocks=1 00:11:46.255 --rc geninfo_unexecuted_blocks=1 00:11:46.255 00:11:46.255 ' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.255 --rc genhtml_branch_coverage=1 00:11:46.255 --rc genhtml_function_coverage=1 00:11:46.255 --rc genhtml_legend=1 00:11:46.255 --rc geninfo_all_blocks=1 00:11:46.255 --rc geninfo_unexecuted_blocks=1 00:11:46.255 00:11:46.255 ' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.255 --rc genhtml_branch_coverage=1 00:11:46.255 --rc genhtml_function_coverage=1 00:11:46.255 --rc genhtml_legend=1 00:11:46.255 --rc geninfo_all_blocks=1 00:11:46.255 --rc geninfo_unexecuted_blocks=1 00:11:46.255 00:11:46.255 ' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:46.255 12:14:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:46.255 12:14:52 
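The "[: : integer expression expected" complaint recorded above deserves a note: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and the test command cannot compare an empty string numerically, so it prints the diagnostic and returns false; the script simply carries on, treating the flag as unset. A small illustration of the failure mode and the usual guards; flag is a made-up variable for demonstration, not one from the script:

# Reproducing the "[: : integer expression expected" message seen above.
flag=''
[ "$flag" -eq 1 ] && echo enabled        # prints the diagnostic: '' is not an integer
[ "${flag:-0}" -eq 1 ] && echo enabled   # guard 1: default empty to 0, no error
[[ $flag == 1 ]] && echo enabled         # guard 2: string comparison never errors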
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.255 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:51.519 12:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:51.519 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:51.519 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:51.519 Found net devices under 0000:af:00.0: cvl_0_0 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:51.519 Found net devices under 0000:af:00.1: cvl_0_1 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.519 12:14:58 
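Above, gather_supported_nvmf_pci_devs walks the whitelisted Intel e810/x722 and Mellanox PCI IDs and resolves each matching function to its kernel net device through sysfs. The core of that resolution, condensed from the trace (the two PCI addresses are the ones found on this node):

# For each candidate PCI function, list the net devices bound to it.
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to bare interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done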
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.519 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:51.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:11:51.778 00:11:51.778 --- 10.0.0.2 ping statistics --- 00:11:51.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.778 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:11:51.778 00:11:51.778 --- 10.0.0.1 ping statistics --- 00:11:51.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.778 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3550439 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3550439 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3550439 ']' 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.778 12:14:58 
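nvmf_tcp_init, traced above, isolates the two ports of one physical NIC so a single host can act as both target and initiator: the target-side port moves into a private network namespace with 10.0.0.2, the initiator-side port keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened, and a ping in each direction proves the path. The same topology, collected from the trace into one root-required sequence:

# Namespace topology used by the test, as traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe-oF TCP port
ping -c 1 10.0.0.2                                     # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target netns -> initiator

In the actual run the iptables rule is also tagged with an SPDK_NVMF comment, which is what the later iptables-save | grep -v SPDK_NVMF cleanup keys on.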
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.778 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.710 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:52.711 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:04.900 Initializing NVMe Controllers 00:12:04.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:04.900 Initialization complete. Launching workers. 00:12:04.900 ======================================================== 00:12:04.900 Latency(us) 00:12:04.900 Device Information : IOPS MiB/s Average min max 00:12:04.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16465.96 64.32 3886.61 824.04 19040.73 00:12:04.900 ======================================================== 00:12:04.900 Total : 16465.96 64.32 3886.61 824.04 19040.73 00:12:04.900 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.900 rmmod nvme_tcp 00:12:04.900 rmmod nvme_fabrics 00:12:04.900 rmmod nvme_keyring 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3550439 ']' 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3550439 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3550439 ']' 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3550439 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.900 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550439 00:12:04.900 12:15:09 
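The example target is then provisioned entirely over JSON-RPC before the initiator drives load against it. In order, as traced above (rpc_cmd is the suite's wrapper around SPDK's RPC client; the flags are exactly those recorded in the log):

# Target-side wiring for the nvmf example test.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512                 # 64 MiB RAM bdev, 512 B blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator-side load: queue depth 64, 4 KiB I/O, 30% reads, 10 seconds.
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'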
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:04.901 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:04.901 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550439' 00:12:04.901 killing process with pid 3550439 00:12:04.901 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3550439 00:12:04.901 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3550439 00:12:04.901 nvmf threads initialize successfully 00:12:04.901 bdev subsystem init successfully 00:12:04.901 created a nvmf target service 00:12:04.901 create targets's poll groups done 00:12:04.901 all subsystems of target started 00:12:04.901 nvmf target is running 00:12:04.901 all subsystems of target stopped 00:12:04.901 destroy targets's poll groups done 00:12:04.901 destroyed the nvmf target service 00:12:04.901 bdev subsystem finish successfully 00:12:04.901 nvmf threads destroy successfully 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.901 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:06.804 00:12:06.804 real 0m20.547s 00:12:06.804 user 0m50.086s 00:12:06.804 sys 0m5.714s 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:06.804 ************************************ 00:12:06.804 END TEST nvmf_example 00:12:06.804 ************************************ 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.804 ************************************ 00:12:06.804 START TEST nvmf_filesystem 00:12:06.804 ************************************ 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:06.804 * Looking for test storage... 00:12:06.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.804 --rc genhtml_branch_coverage=1 00:12:06.804 --rc genhtml_function_coverage=1 00:12:06.804 --rc genhtml_legend=1 00:12:06.804 --rc geninfo_all_blocks=1 00:12:06.804 --rc geninfo_unexecuted_blocks=1 00:12:06.804 00:12:06.804 ' 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.804 --rc genhtml_branch_coverage=1 00:12:06.804 --rc genhtml_function_coverage=1 00:12:06.804 --rc genhtml_legend=1 00:12:06.804 --rc geninfo_all_blocks=1 00:12:06.804 --rc geninfo_unexecuted_blocks=1 00:12:06.804 00:12:06.804 ' 00:12:06.804 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.804 --rc genhtml_branch_coverage=1 00:12:06.804 --rc genhtml_function_coverage=1 00:12:06.804 --rc genhtml_legend=1 00:12:06.804 --rc geninfo_all_blocks=1 00:12:06.804 --rc geninfo_unexecuted_blocks=1 00:12:06.804 00:12:06.804 ' 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.805 --rc genhtml_branch_coverage=1 00:12:06.805 --rc genhtml_function_coverage=1 00:12:06.805 --rc genhtml_legend=1 00:12:06.805 --rc geninfo_all_blocks=1 00:12:06.805 --rc geninfo_unexecuted_blocks=1 00:12:06.805 00:12:06.805 ' 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:06.805 12:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:06.805 
12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:06.805 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:06.806 #define SPDK_CONFIG_H 00:12:06.806 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:06.806 #define SPDK_CONFIG_APPS 1 00:12:06.806 #define SPDK_CONFIG_ARCH native 00:12:06.806 #define SPDK_CONFIG_ASAN 1 00:12:06.806 #undef SPDK_CONFIG_AVAHI 00:12:06.806 #undef SPDK_CONFIG_CET 00:12:06.806 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:06.806 #define SPDK_CONFIG_COVERAGE 1 00:12:06.806 #define SPDK_CONFIG_CROSS_PREFIX 00:12:06.806 #undef SPDK_CONFIG_CRYPTO 00:12:06.806 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:06.806 #undef SPDK_CONFIG_CUSTOMOCF 00:12:06.806 #undef SPDK_CONFIG_DAOS 00:12:06.806 #define SPDK_CONFIG_DAOS_DIR 00:12:06.806 #define SPDK_CONFIG_DEBUG 1 00:12:06.806 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:06.806 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:06.806 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:06.806 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:06.806 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:06.806 #undef SPDK_CONFIG_DPDK_UADK 00:12:06.806 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:06.806 #define SPDK_CONFIG_EXAMPLES 1 00:12:06.806 #undef SPDK_CONFIG_FC 00:12:06.806 #define SPDK_CONFIG_FC_PATH 00:12:06.806 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:06.806 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:06.806 #define SPDK_CONFIG_FSDEV 1 00:12:06.806 #undef SPDK_CONFIG_FUSE 00:12:06.806 #undef SPDK_CONFIG_FUZZER 00:12:06.806 #define SPDK_CONFIG_FUZZER_LIB 00:12:06.806 #undef SPDK_CONFIG_GOLANG 00:12:06.806 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:06.806 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:06.806 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:06.806 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:06.806 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:06.806 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:06.806 #undef SPDK_CONFIG_HAVE_LZ4 00:12:06.806 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:06.806 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:06.806 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:06.806 #define SPDK_CONFIG_IDXD 1 00:12:06.806 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:06.806 #undef SPDK_CONFIG_IPSEC_MB 00:12:06.806 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:06.806 #define SPDK_CONFIG_ISAL 1 00:12:06.806 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:06.806 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:06.806 #define SPDK_CONFIG_LIBDIR 00:12:06.806 #undef SPDK_CONFIG_LTO 00:12:06.806 #define SPDK_CONFIG_MAX_LCORES 128 00:12:06.806 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:06.806 #define SPDK_CONFIG_NVME_CUSE 1 00:12:06.806 #undef SPDK_CONFIG_OCF 00:12:06.806 #define SPDK_CONFIG_OCF_PATH 00:12:06.806 #define SPDK_CONFIG_OPENSSL_PATH 00:12:06.806 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:06.806 #define SPDK_CONFIG_PGO_DIR 00:12:06.806 #undef SPDK_CONFIG_PGO_USE 00:12:06.806 #define SPDK_CONFIG_PREFIX /usr/local 00:12:06.806 #undef SPDK_CONFIG_RAID5F 00:12:06.806 #undef SPDK_CONFIG_RBD 00:12:06.806 #define SPDK_CONFIG_RDMA 1 00:12:06.806 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:06.806 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:06.806 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:06.806 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:06.806 #define SPDK_CONFIG_SHARED 1 00:12:06.806 #undef SPDK_CONFIG_SMA 00:12:06.806 #define SPDK_CONFIG_TESTS 1 00:12:06.806 #undef SPDK_CONFIG_TSAN 
00:12:06.806 #define SPDK_CONFIG_UBLK 1 00:12:06.806 #define SPDK_CONFIG_UBSAN 1 00:12:06.806 #undef SPDK_CONFIG_UNIT_TESTS 00:12:06.806 #undef SPDK_CONFIG_URING 00:12:06.806 #define SPDK_CONFIG_URING_PATH 00:12:06.806 #undef SPDK_CONFIG_URING_ZNS 00:12:06.806 #undef SPDK_CONFIG_USDT 00:12:06.806 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:06.806 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:06.806 #undef SPDK_CONFIG_VFIO_USER 00:12:06.806 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:06.806 #define SPDK_CONFIG_VHOST 1 00:12:06.806 #define SPDK_CONFIG_VIRTIO 1 00:12:06.806 #undef SPDK_CONFIG_VTUNE 00:12:06.806 #define SPDK_CONFIG_VTUNE_DIR 00:12:06.806 #define SPDK_CONFIG_WERROR 1 00:12:06.806 #define SPDK_CONFIG_WPDK_DIR 00:12:06.806 #undef SPDK_CONFIG_XNVME 00:12:06.806 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:06.806 12:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:06.806 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:06.807 12:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:06.807 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
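The long run of paired ": <value>" and "export SPDK_TEST_*" entries traced above is bash's default-parameter idiom: the no-op ":" command forces a ${VAR:=default} expansion that assigns the default only when the CI job has not already set the flag, and the export then publishes it to every child process. A minimal sketch of that pattern, using one flag from this run (the exact parameter expansion is inferred from the trace, not copied from the script):

    # Give SPDK_TEST_NVME a default of 0 unless the environment already set it,
    # then export it for the test processes spawned later.
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME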
00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
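Just above, the harness also rebuilds its LeakSanitizer suppression list and points LSAN_OPTIONS at it, so the known libfuse3 leak does not fail an ASAN-instrumented run. A rough sketch of that flow, using the same path and suppression entry seen in the trace:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                   # start from a clean list
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file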
00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:06.808 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3553517 ]] 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3553517 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
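set_test_storage, traced next, reserves scratch space for the test. The requested_size of 2214592512 bytes below is the 2147483648-byte (2 GiB) argument plus exactly 64 MiB, presumably added as headroom; the function loads df -T output into the mounts/fss/avails/sizes/uses arrays and then takes the first candidate directory whose mount point has enough free space. A condensed sketch of the candidate loop (array setup and the tmpfs/ramfs special cases visible below are omitted):

    requested_size=2214592512                  # 2 GiB argument + 64 MiB headroom
    for target_dir in "${storage_candidates[@]}"; do
        # Resolve the directory to its mount point, as the trace does with df+awk.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}         # free bytes recorded for that mount
        (( target_space >= requested_size )) && break
    done
    export SPDK_TEST_STORAGE=$target_dir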
00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:06.809 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XmsF9H 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XmsF9H/tests/target /tmp/spdk.XmsF9H 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88574898176 00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837203968 00:12:07.068 12:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12262305792
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50407235584
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144431104
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=49344303104
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074298880
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:12:07.068 * Looking for test storage...
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88574898176
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=14476898304
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:07.068 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:07.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd
00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:12:07.069 12:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.069 --rc genhtml_branch_coverage=1 00:12:07.069 --rc genhtml_function_coverage=1 00:12:07.069 --rc genhtml_legend=1 00:12:07.069 --rc geninfo_all_blocks=1 00:12:07.069 --rc geninfo_unexecuted_blocks=1 00:12:07.069 00:12:07.069 ' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.069 --rc genhtml_branch_coverage=1 00:12:07.069 --rc genhtml_function_coverage=1 00:12:07.069 --rc genhtml_legend=1 00:12:07.069 --rc geninfo_all_blocks=1 00:12:07.069 --rc geninfo_unexecuted_blocks=1 00:12:07.069 00:12:07.069 ' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.069 --rc genhtml_branch_coverage=1 00:12:07.069 --rc genhtml_function_coverage=1 00:12:07.069 --rc genhtml_legend=1 00:12:07.069 --rc geninfo_all_blocks=1 00:12:07.069 --rc geninfo_unexecuted_blocks=1 00:12:07.069 00:12:07.069 ' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:07.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.069 --rc genhtml_branch_coverage=1 00:12:07.069 --rc genhtml_function_coverage=1 00:12:07.069 --rc genhtml_legend=1 00:12:07.069 --rc geninfo_all_blocks=1 00:12:07.069 --rc geninfo_unexecuted_blocks=1 00:12:07.069 00:12:07.069 ' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.069 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.070 12:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.070 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:12.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:12.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:12.332 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.333 12:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:12.333 Found net devices under 0000:af:00.0: cvl_0_0 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:12.333 Found net devices under 0000:af:00.1: cvl_0_1 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.333 12:15:19 
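The discovery pass above (nvmf/common.sh@410-429 in the trace) walks the supported NICs and records which kernel net devices sit on them. Collected into a standalone sketch; pci_devs is hand-filled here because the pci_bus_cache lookup that populates it is not shown in this section:

  #!/usr/bin/env bash
  # Sketch of the loop traced above; it only reads sysfs.
  pci_devs=(0000:af:00.0 0000:af:00.1)   # the E810 functions found above (0x8086:0x159b)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep names
    # (the real loop also checks each device's operstate is "up" before keeping it)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done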
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.333 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:12:12.592 00:12:12.592 --- 10.0.0.2 ping statistics --- 00:12:12.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.592 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:12.592 00:12:12.592 --- 10.0.0.1 ping statistics --- 00:12:12.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.592 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.592 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.851 ************************************ 00:12:12.851 START TEST nvmf_filesystem_no_in_capsule 00:12:12.851 ************************************ 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3556600 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3556600 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3556600 ']' 00:12:12.851 
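nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test link: the target port moves into its own network namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP. The same commands, collected into one block (all names and addresses are exactly those in the trace; needs root):

  #!/usr/bin/env bash
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target side into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP in
  ping -c 1 10.0.0.2                                            # target reachable from root ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # initiator reachable from netns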
12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.851 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.851 [2024-12-10 12:15:19.500787] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:12.851 [2024-12-10 12:15:19.500875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.851 [2024-12-10 12:15:19.617298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.109 [2024-12-10 12:15:19.721140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.109 [2024-12-10 12:15:19.721189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.109 [2024-12-10 12:15:19.721200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.109 [2024-12-10 12:15:19.721210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.109 [2024-12-10 12:15:19.721217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
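The target itself is launched inside that namespace (trace: nvmfappstart, nvmf/common.sh@508-510) with shm id 0, the full tracepoint mask, and a four-core mask, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A stand-in for that launch-and-wait; the poll loop is an assumption, as the real waitforlisten lives in autotest_common.sh:

  #!/usr/bin/env bash
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target is up (simplified; no timeout handling)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.1; done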
00:12:13.109 [2024-12-10 12:15:19.723329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.109 [2024-12-10 12:15:19.723402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.109 [2024-12-10 12:15:19.723499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.109 [2024-12-10 12:15:19.723509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.673 [2024-12-10 12:15:20.368740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.673 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.238 Malloc1 00:12:14.238 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.238 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.238 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.238 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.238 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.239 12:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.239 [2024-12-10 12:15:20.977224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.239 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.239 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.239 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:14.239 { 00:12:14.239 "name": "Malloc1", 00:12:14.239 "aliases": [ 00:12:14.239 "f5a25f8f-4f62-440e-a7a5-0eff2eca0605" 00:12:14.239 ], 00:12:14.239 "product_name": "Malloc disk", 00:12:14.239 "block_size": 512, 00:12:14.239 "num_blocks": 1048576, 00:12:14.239 "uuid": "f5a25f8f-4f62-440e-a7a5-0eff2eca0605", 00:12:14.239 "assigned_rate_limits": { 00:12:14.239 "rw_ios_per_sec": 0, 00:12:14.239 "rw_mbytes_per_sec": 0, 00:12:14.239 "r_mbytes_per_sec": 0, 00:12:14.239 "w_mbytes_per_sec": 0 00:12:14.239 }, 00:12:14.239 "claimed": true, 00:12:14.239 "claim_type": "exclusive_write", 00:12:14.239 "zoned": false, 00:12:14.239 "supported_io_types": { 00:12:14.239 "read": 
true, 00:12:14.239 "write": true, 00:12:14.239 "unmap": true, 00:12:14.239 "flush": true, 00:12:14.239 "reset": true, 00:12:14.239 "nvme_admin": false, 00:12:14.239 "nvme_io": false, 00:12:14.239 "nvme_io_md": false, 00:12:14.239 "write_zeroes": true, 00:12:14.239 "zcopy": true, 00:12:14.239 "get_zone_info": false, 00:12:14.239 "zone_management": false, 00:12:14.239 "zone_append": false, 00:12:14.239 "compare": false, 00:12:14.239 "compare_and_write": false, 00:12:14.239 "abort": true, 00:12:14.239 "seek_hole": false, 00:12:14.239 "seek_data": false, 00:12:14.239 "copy": true, 00:12:14.239 "nvme_iov_md": false 00:12:14.239 }, 00:12:14.239 "memory_domains": [ 00:12:14.239 { 00:12:14.239 "dma_device_id": "system", 00:12:14.239 "dma_device_type": 1 00:12:14.239 }, 00:12:14.239 { 00:12:14.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.239 "dma_device_type": 2 00:12:14.239 } 00:12:14.239 ], 00:12:14.239 "driver_specific": {} 00:12:14.239 } 00:12:14.239 ]' 00:12:14.239 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:14.239 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:14.239 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:14.497 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:14.497 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:14.497 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:14.497 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:14.497 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:15.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
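With the app listening, the no_in_capsule run provisions the target over RPC: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from filesystem.sh@12-13, matching the num_blocks in the bdev dump above), a subsystem, a namespace, and a listener. rpc_cmd in the trace wraps scripts/rpc.py against /var/tmp/spdk.sock; issuing the same calls directly would look like:

  #!/usr/bin/env bash
  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
  $rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420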
SPDKISFASTANDAWESOME 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:17.953 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:18.527 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.502 ************************************ 00:12:19.502 START TEST filesystem_ext4 00:12:19.502 ************************************ 00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
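On the host side, filesystem.sh@60-69 (traced above) attaches the subsystem with nvme-cli, resolves which block device carries the subsystem serial, compares its size to the malloc bdev, and lays down a single GPT partition. Equivalent standalone commands; the hostnqn/hostid UUID is this machine's, and the size comparison is elided:

  #!/usr/bin/env bash
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
  # find the device name by the serial set on the subsystem
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%   # one full-size partition
  partprobe && sleep 1                                               # let the kernel see it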
00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:19.502 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:19.503 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:19.503 mke2fs 1.47.0 (5-Feb-2023) 00:12:19.760 Discarding device blocks: 0/522240 done 00:12:19.760 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:19.760 Filesystem UUID: 846004ee-f86d-4b18-ba84-99d737e7d2ba 00:12:19.760 Superblock backups stored on blocks: 00:12:19.760 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:19.760 00:12:19.760 Allocating group tables: 0/64 done 00:12:19.760 Writing inode tables: 0/64 done 00:12:19.760 Creating journal (8192 blocks): done 00:12:19.760 Writing superblocks and filesystem accounting information: 0/64 done 00:12:19.760 00:12:19.760 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:19.760 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.309 
12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3556600 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.309 00:12:26.309 real 0m6.287s 00:12:26.309 user 0m0.027s 00:12:26.309 sys 0m0.071s 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:26.309 ************************************ 00:12:26.309 END TEST filesystem_ext4 00:12:26.309 ************************************ 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.309 ************************************ 00:12:26.309 START TEST filesystem_btrfs 00:12:26.309 ************************************ 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:26.309 12:15:32 
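filesystem_ext4 above, and the btrfs and xfs TESTs that follow, all run the same check from target/filesystem.sh: make a filesystem on the exported partition, mount it, create and delete a file with syncs in between, unmount, then confirm that the nvmf_tgt process (kill -0) and the partition (lsblk) both survived the I/O. As a sketch, with the force-flag branching taken from common.sh@935-938 in the trace and retry logic elided:

  #!/usr/bin/env bash
  fstype=$1                                    # ext4 | btrfs | xfs
  part=/dev/nvme0n1p1
  force=-f; [[ $fstype == ext4 ]] && force=-F  # ext4 spells its force flag differently
  mkfs."$fstype" "$force" "$part"
  mount "$part" /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                           # target must still be alive after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1p1      # and the partition still visible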
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:26.309 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:26.309 btrfs-progs v6.8.1 00:12:26.309 See https://btrfs.readthedocs.io for more information. 00:12:26.309 00:12:26.309 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:26.309 NOTE: several default settings have changed in version 5.15, please make sure 00:12:26.309 this does not affect your deployments: 00:12:26.310 - DUP for metadata (-m dup) 00:12:26.310 - enabled no-holes (-O no-holes) 00:12:26.310 - enabled free-space-tree (-R free-space-tree) 00:12:26.310 00:12:26.310 Label: (null) 00:12:26.310 UUID: b96f0f0a-24b7-45ea-b4c0-dfa9636a6242 00:12:26.310 Node size: 16384 00:12:26.310 Sector size: 4096 (CPU page size: 4096) 00:12:26.310 Filesystem size: 510.00MiB 00:12:26.310 Block group profiles: 00:12:26.310 Data: single 8.00MiB 00:12:26.310 Metadata: DUP 32.00MiB 00:12:26.310 System: DUP 8.00MiB 00:12:26.310 SSD detected: yes 00:12:26.310 Zoned device: no 00:12:26.310 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:26.310 Checksum: crc32c 00:12:26.310 Number of devices: 1 00:12:26.310 Devices: 00:12:26.310 ID SIZE PATH 00:12:26.310 1 510.00MiB /dev/nvme0n1p1 00:12:26.310 00:12:26.310 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:26.310 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3556600 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.310 
12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.310 00:12:26.310 real 0m0.467s 00:12:26.310 user 0m0.025s 00:12:26.310 sys 0m0.113s 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.310 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:26.310 ************************************ 00:12:26.310 END TEST filesystem_btrfs 00:12:26.310 ************************************ 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.568 ************************************ 00:12:26.568 START TEST filesystem_xfs 00:12:26.568 ************************************ 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:26.568 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:26.568 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:26.568 = sectsz=512 attr=2, projid32bit=1 00:12:26.568 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:26.568 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:26.568 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:26.568 = sunit=0 swidth=0 blks 00:12:26.568 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:26.568 log =internal log bsize=4096 blocks=16384, version=2 00:12:26.568 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:26.568 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:27.499 Discarding blocks...Done. 00:12:27.499 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:27.499 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3556600 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:30.022 00:12:30.022 real 0m3.115s 00:12:30.022 user 0m0.024s 00:12:30.022 sys 0m0.070s 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:30.022 ************************************ 00:12:30.022 END TEST filesystem_xfs 00:12:30.022 ************************************ 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.022 12:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3556600 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3556600 ']' 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3556600 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3556600 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3556600' 00:12:30.022 killing process with pid 3556600 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3556600 00:12:30.022 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3556600 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:32.546 00:12:32.546 real 0m19.807s 00:12:32.546 user 1m16.677s 00:12:32.546 sys 0m1.491s 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 ************************************ 00:12:32.546 END TEST nvmf_filesystem_no_in_capsule 00:12:32.546 ************************************ 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 ************************************ 00:12:32.546 START TEST nvmf_filesystem_in_capsule 00:12:32.546 ************************************ 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3559972 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3559972 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3559972 ']' 00:12:32.546 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.547 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.547 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
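Teardown of the no_in_capsule run, traced above before the in_capsule rerun spins up: drop the test partition under a lock, detach the initiator, delete the subsystem over RPC, and stop the target (pid 3556600 in this run). Collected into one block, with the direct rpc.py call standing in for rpc_cmd:

  #!/usr/bin/env bash
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # serialize against udev/partprobe
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 3556600 && wait 3556600                     # killprocess: kill, then reap the pid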
00:12:32.547 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.547 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.804 [2024-12-10 12:15:39.400150] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:32.804 [2024-12-10 12:15:39.400247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.804 [2024-12-10 12:15:39.518723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.804 [2024-12-10 12:15:39.628183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.805 [2024-12-10 12:15:39.628231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.805 [2024-12-10 12:15:39.628242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.805 [2024-12-10 12:15:39.628253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.805 [2024-12-10 12:15:39.628261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.062 [2024-12-10 12:15:39.630871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.062 [2024-12-10 12:15:39.630890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.062 [2024-12-10 12:15:39.630907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.062 [2024-12-10 12:15:39.630917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.626 [2024-12-10 12:15:40.254689] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.626 12:15:40 
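The in_capsule rerun now under way differs from the first pass in exactly one RPC argument: nvmf_create_transport is called with -c 4096 (filesystem.sh@52 above), so the transport accepts up to 4 KiB of command data carried in-capsule instead of none. Everything that follows (Malloc1, cnode1, the listener on 10.0.0.2:4420, and the three filesystem TESTs) repeats the earlier sequence:

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 4096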
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.626 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.190 Malloc1 00:12:34.190 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.191 [2024-12-10 12:15:40.847586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:34.191 12:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:34.191 { 00:12:34.191 "name": "Malloc1", 00:12:34.191 "aliases": [ 00:12:34.191 "8b508915-3aed-4006-b9d9-1e178e098f6b" 00:12:34.191 ], 00:12:34.191 "product_name": "Malloc disk", 00:12:34.191 "block_size": 512, 00:12:34.191 "num_blocks": 1048576, 00:12:34.191 "uuid": "8b508915-3aed-4006-b9d9-1e178e098f6b", 00:12:34.191 "assigned_rate_limits": { 00:12:34.191 "rw_ios_per_sec": 0, 00:12:34.191 "rw_mbytes_per_sec": 0, 00:12:34.191 "r_mbytes_per_sec": 0, 00:12:34.191 "w_mbytes_per_sec": 0 00:12:34.191 }, 00:12:34.191 "claimed": true, 00:12:34.191 "claim_type": "exclusive_write", 00:12:34.191 "zoned": false, 00:12:34.191 "supported_io_types": { 00:12:34.191 "read": true, 00:12:34.191 "write": true, 00:12:34.191 "unmap": true, 00:12:34.191 "flush": true, 00:12:34.191 "reset": true, 00:12:34.191 "nvme_admin": false, 00:12:34.191 "nvme_io": false, 00:12:34.191 "nvme_io_md": false, 00:12:34.191 "write_zeroes": true, 00:12:34.191 "zcopy": true, 00:12:34.191 "get_zone_info": false, 00:12:34.191 "zone_management": false, 00:12:34.191 "zone_append": false, 00:12:34.191 "compare": false, 00:12:34.191 "compare_and_write": false, 00:12:34.191 "abort": true, 00:12:34.191 "seek_hole": false, 00:12:34.191 "seek_data": false, 00:12:34.191 "copy": true, 00:12:34.191 "nvme_iov_md": false 00:12:34.191 }, 00:12:34.191 "memory_domains": [ 00:12:34.191 { 00:12:34.191 "dma_device_id": "system", 00:12:34.191 "dma_device_type": 1 00:12:34.191 }, 00:12:34.191 { 00:12:34.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.191 "dma_device_type": 2 00:12:34.191 } 00:12:34.191 ], 00:12:34.191 "driver_specific": {} 00:12:34.191 } 00:12:34.191 ]' 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:34.191 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.560 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.560 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:35.560 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.560 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:35.560 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:37.457 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:38.021 12:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:38.021 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.954 ************************************ 00:12:38.954 START TEST filesystem_in_capsule_ext4 00:12:38.954 ************************************ 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:38.954 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:38.954 mke2fs 1.47.0 (5-Feb-2023) 00:12:39.211 Discarding device blocks: 0/522240 done 00:12:39.211 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:39.211 Filesystem UUID: a610699c-8fe3-46bb-8de0-bc8f6e0106ee 00:12:39.211 Superblock backups stored on blocks: 00:12:39.211 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:39.211 00:12:39.211 Allocating group tables: 0/64 done 00:12:39.211 Writing inode tables: 
0/64 done 00:12:42.484 Creating journal (8192 blocks): done 00:12:43.977 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:12:43.977 00:12:43.977 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:43.977 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3559972 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:50.527 00:12:50.527 real 0m10.549s 00:12:50.527 user 0m0.026s 00:12:50.527 sys 0m0.079s 00:12:50.527 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:50.528 ************************************ 00:12:50.528 END TEST filesystem_in_capsule_ext4 00:12:50.528 ************************************ 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.528 
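The ext4 pass that just ended exercises the full connect/format/use/verify cycle that the btrfs and xfs passes below repeat. Condensed from the nvme connect, parted, and target/filesystem.sh steps visible in the trace; only the $nvmfpid variable name is illustrative:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    mkfs.ext4 -F /dev/nvme0n1p1       # the btrfs and xfs passes use mkfs.<fs> -f instead
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                # the target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # device and partition still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1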
************************************ 00:12:50.528 START TEST filesystem_in_capsule_btrfs 00:12:50.528 ************************************ 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:50.528 btrfs-progs v6.8.1 00:12:50.528 See https://btrfs.readthedocs.io for more information. 00:12:50.528 00:12:50.528 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:50.528 NOTE: several default settings have changed in version 5.15, please make sure 00:12:50.528 this does not affect your deployments: 00:12:50.528 - DUP for metadata (-m dup) 00:12:50.528 - enabled no-holes (-O no-holes) 00:12:50.528 - enabled free-space-tree (-R free-space-tree) 00:12:50.528 00:12:50.528 Label: (null) 00:12:50.528 UUID: eea395e3-bbc7-4f41-a169-924094634687 00:12:50.528 Node size: 16384 00:12:50.528 Sector size: 4096 (CPU page size: 4096) 00:12:50.528 Filesystem size: 510.00MiB 00:12:50.528 Block group profiles: 00:12:50.528 Data: single 8.00MiB 00:12:50.528 Metadata: DUP 32.00MiB 00:12:50.528 System: DUP 8.00MiB 00:12:50.528 SSD detected: yes 00:12:50.528 Zoned device: no 00:12:50.528 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:50.528 Checksum: crc32c 00:12:50.528 Number of devices: 1 00:12:50.528 Devices: 00:12:50.528 ID SIZE PATH 00:12:50.528 1 510.00MiB /dev/nvme0n1p1 00:12:50.528 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:50.528 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:50.786 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:50.786 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:50.786 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:50.786 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:50.786 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:50.786 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3559972 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:51.043 00:12:51.043 real 0m1.298s 00:12:51.043 user 0m0.024s 00:12:51.043 sys 0m0.117s 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:51.043 ************************************ 00:12:51.043 END TEST filesystem_in_capsule_btrfs 00:12:51.043 ************************************ 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.043 ************************************ 00:12:51.043 START TEST filesystem_in_capsule_xfs 00:12:51.043 ************************************ 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:51.043 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:51.044 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:51.044 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:51.044 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:51.044 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:51.044 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:51.044 = sectsz=512 attr=2, projid32bit=1 00:12:51.044 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:51.044 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:51.044 data = bsize=4096 blocks=130560, imaxpct=25 00:12:51.044 = sunit=0 swidth=0 blks 00:12:51.044 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:51.044 log =internal log bsize=4096 blocks=16384, version=2 00:12:51.044 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:51.044 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:51.974 Discarding blocks...Done. 
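The mkfs.xfs geometry printed above is consistent with the sizes negotiated earlier in the test: 130560 data blocks of 4096 bytes is exactly the ~510 MiB partition carved out of the 512 MiB (536,870,912-byte) Malloc1 bdev, with the remaining 2 MiB plausibly consumed by GPT metadata and partition alignment (an inference from the numbers, not something the trace states):

    echo $(( 130560 * 4096 ))           # 534773760 bytes = exactly 510 MiB
    echo $(( 536870912 - 534773760 ))   # 2097152 bytes (2 MiB) of partition overhead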
00:12:51.974 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:51.974 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3559972 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:54.497 00:12:54.497 real 0m3.196s 00:12:54.497 user 0m0.023s 00:12:54.497 sys 0m0.074s 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:54.497 ************************************ 00:12:54.497 END TEST filesystem_in_capsule_xfs 00:12:54.497 ************************************ 00:12:54.497 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:54.497 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:54.497 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:54.754 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3559972 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3559972 ']' 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3559972 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559972 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559972' 00:12:54.755 killing process with pid 3559972 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3559972 00:12:54.755 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3559972 00:12:57.279 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:57.279 00:12:57.279 real 0m24.757s 00:12:57.279 user 1m36.193s 00:12:57.279 sys 0m1.671s 00:12:57.279 12:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.279 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.279 ************************************ 00:12:57.279 END TEST nvmf_filesystem_in_capsule 00:12:57.279 ************************************ 00:12:57.279 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:57.279 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.279 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.537 rmmod nvme_tcp 00:12:57.537 rmmod nvme_fabrics 00:12:57.537 rmmod nvme_keyring 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.537 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.436 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.436 00:12:59.436 real 0m52.851s 00:12:59.436 user 2m54.809s 00:12:59.436 sys 0m7.533s 00:12:59.436 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.436 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.436 
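The teardown that closes the in-capsule test follows the same order every nvmf target test in this log uses; the sketch below is reconstructed from the trace, with killprocess expanded into the plain kill/wait pair it wraps (helper internals live in the suite's common.sh files):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess 3559972
    modprobe -v -r nvme-tcp                                # drags out nvme_fabrics/nvme_keyring too
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's firewall rules
    ip -4 addr flush cvl_0_1                               # return the initiator port to a clean state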
************************************ 00:12:59.436 END TEST nvmf_filesystem 00:12:59.436 ************************************ 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.694 ************************************ 00:12:59.694 START TEST nvmf_target_discovery 00:12:59.694 ************************************ 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:59.694 * Looking for test storage... 00:12:59.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:59.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.694 --rc genhtml_branch_coverage=1 00:12:59.694 --rc genhtml_function_coverage=1 00:12:59.694 --rc genhtml_legend=1 00:12:59.694 --rc geninfo_all_blocks=1 00:12:59.694 --rc geninfo_unexecuted_blocks=1 00:12:59.694 00:12:59.694 ' 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:59.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.694 --rc genhtml_branch_coverage=1 00:12:59.694 --rc genhtml_function_coverage=1 00:12:59.694 --rc genhtml_legend=1 00:12:59.694 --rc geninfo_all_blocks=1 00:12:59.694 --rc geninfo_unexecuted_blocks=1 00:12:59.694 00:12:59.694 ' 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:59.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.694 --rc genhtml_branch_coverage=1 00:12:59.694 --rc genhtml_function_coverage=1 00:12:59.694 --rc genhtml_legend=1 00:12:59.694 --rc geninfo_all_blocks=1 00:12:59.694 --rc geninfo_unexecuted_blocks=1 00:12:59.694 00:12:59.694 ' 00:12:59.694 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:59.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.694 --rc genhtml_branch_coverage=1 00:12:59.695 --rc genhtml_function_coverage=1 00:12:59.695 --rc genhtml_legend=1 00:12:59.695 --rc geninfo_all_blocks=1 00:12:59.695 --rc geninfo_unexecuted_blocks=1 00:12:59.695 00:12:59.695 ' 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.695 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.962 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.962 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.962 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.962 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.963 12:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:04.963 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:04.963 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:04.963 Found net devices under 0000:af:00.0: cvl_0_0 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
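The walk traced above is how gather_supported_nvmf_pci_devs turns vendor/device IDs into usable interface names: the known e810/x722/mlx PCI IDs are collected into arrays, and each matching PCI address is resolved to its kernel netdev through sysfs. A standalone sketch of that resolution step (the two BDF values are the ones found in this run; another host would report different addresses):

#!/usr/bin/env bash
# Map NIC PCI addresses to interface names the way the trace does: the
# kernel exposes each port's netdev under /sys/bus/pci/devices/<BDF>/net/.
for pci in 0000:af:00.0 0000:af:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue        # skip BDFs with no bound netdev
        echo "Found net devices under $pci: ${path##*/}"
    done
done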
00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:04.963 Found net devices under 0000:af:00.1: cvl_0_1 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.963 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.223 12:16:11 
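The sequence just traced, together with the link-up and ping entries that follow, builds the physical-loopback topology the whole test rides on: one NIC port moves into a private network namespace and becomes the target side, while its peer port stays in the root namespace as the initiator side, so NVMe/TCP traffic genuinely crosses the physical link. A condensed sketch of those steps (interface and namespace names are the ones from this run):

#!/usr/bin/env bash
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1       # start from clean addresses
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
ping -c 1 10.0.0.2     # root ns -> namespace, matching the verification in the log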
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:05.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:13:05.223 00:13:05.223 --- 10.0.0.2 ping statistics --- 00:13:05.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.223 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:13:05.223 00:13:05.223 --- 10.0.0.1 ping statistics --- 00:13:05.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.223 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:05.223 12:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3567682 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3567682 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3567682 ']' 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.223 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.223 [2024-12-10 12:16:12.039113] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:13:05.223 [2024-12-10 12:16:12.039227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.481 [2024-12-10 12:16:12.159830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.481 [2024-12-10 12:16:12.265356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.481 [2024-12-10 12:16:12.265402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.481 [2024-12-10 12:16:12.265413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.481 [2024-12-10 12:16:12.265423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.481 [2024-12-10 12:16:12.265431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
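Between launching nvmf_tgt inside the namespace and issuing the first RPC, waitforlisten blocks until the recorded pid (3567682 here) is alive and answering on /var/tmp/spdk.sock. A simplified sketch of that startup handshake, with the error handling of autotest_common.sh reduced to the essentials:

#!/usr/bin/env bash
# Start the target in the test namespace, then poll its RPC socket.
SOCK=/var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the app is up and listening
    scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
    sleep 0.1
done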
00:13:05.481 [2024-12-10 12:16:12.267898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.481 [2024-12-10 12:16:12.267983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.481 [2024-12-10 12:16:12.268059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.481 [2024-12-10 12:16:12.268067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.048 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.048 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:06.048 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.048 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.048 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 [2024-12-10 12:16:12.909894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 Null1 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 [2024-12-10 12:16:12.974745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 Null2 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:06.308 Null3 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 Null4 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.308 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:06.620 00:13:06.620 Discovery Log Number of Records 6, Generation counter 6 00:13:06.620 =====Discovery Log Entry 0====== 00:13:06.620 trtype: tcp 00:13:06.620 adrfam: ipv4 00:13:06.620 subtype: current discovery subsystem 00:13:06.620 treq: not required 00:13:06.620 portid: 0 00:13:06.620 trsvcid: 4420 00:13:06.620 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:06.620 traddr: 10.0.0.2 00:13:06.620 eflags: explicit discovery connections, duplicate discovery information 00:13:06.620 sectype: none 00:13:06.620 =====Discovery Log Entry 1====== 00:13:06.620 trtype: tcp 00:13:06.620 adrfam: ipv4 00:13:06.620 subtype: nvme subsystem 00:13:06.620 treq: not required 00:13:06.620 portid: 0 00:13:06.620 trsvcid: 4420 00:13:06.620 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:06.620 traddr: 10.0.0.2 00:13:06.620 eflags: none 00:13:06.620 sectype: none 00:13:06.620 =====Discovery Log Entry 2====== 00:13:06.620 trtype: tcp 00:13:06.620 adrfam: ipv4 00:13:06.620 subtype: nvme subsystem 00:13:06.620 treq: not required 00:13:06.620 portid: 0 00:13:06.620 trsvcid: 4420 00:13:06.620 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:06.620 traddr: 10.0.0.2 00:13:06.620 eflags: none 00:13:06.620 sectype: none 00:13:06.620 =====Discovery Log Entry 3====== 00:13:06.620 trtype: tcp 00:13:06.620 adrfam: ipv4 00:13:06.620 subtype: nvme subsystem 00:13:06.620 treq: not required 00:13:06.620 portid: 0 00:13:06.620 trsvcid: 4420 00:13:06.620 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:06.620 traddr: 10.0.0.2 00:13:06.620 eflags: none 00:13:06.620 sectype: none 00:13:06.620 =====Discovery Log Entry 4====== 00:13:06.620 trtype: tcp 00:13:06.620 adrfam: ipv4 00:13:06.620 subtype: nvme subsystem 
00:13:06.620 treq: not required 00:13:06.620 portid: 0 00:13:06.620 trsvcid: 4420 00:13:06.620 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:06.620 traddr: 10.0.0.2 00:13:06.620 eflags: none 00:13:06.620 sectype: none 00:13:06.620 =====Discovery Log Entry 5====== 00:13:06.620 trtype: tcp 00:13:06.620 adrfam: ipv4 00:13:06.620 subtype: discovery subsystem referral 00:13:06.620 treq: not required 00:13:06.620 portid: 0 00:13:06.620 trsvcid: 4430 00:13:06.620 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:06.620 traddr: 10.0.0.2 00:13:06.620 eflags: none 00:13:06.620 sectype: none 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:06.620 Perform nvmf subsystem discovery via RPC 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.620 [ 00:13:06.620 { 00:13:06.620 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:06.620 "subtype": "Discovery", 00:13:06.620 "listen_addresses": [ 00:13:06.620 { 00:13:06.620 "trtype": "TCP", 00:13:06.620 "adrfam": "IPv4", 00:13:06.620 "traddr": "10.0.0.2", 00:13:06.620 "trsvcid": "4420" 00:13:06.620 } 00:13:06.620 ], 00:13:06.620 "allow_any_host": true, 00:13:06.620 "hosts": [] 00:13:06.620 }, 00:13:06.620 { 00:13:06.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.620 "subtype": "NVMe", 00:13:06.620 "listen_addresses": [ 00:13:06.620 { 00:13:06.620 "trtype": "TCP", 00:13:06.620 "adrfam": "IPv4", 00:13:06.620 "traddr": "10.0.0.2", 00:13:06.620 "trsvcid": "4420" 00:13:06.620 } 00:13:06.620 ], 00:13:06.620 "allow_any_host": true, 00:13:06.620 "hosts": [], 00:13:06.620 "serial_number": "SPDK00000000000001", 00:13:06.620 "model_number": "SPDK bdev Controller", 00:13:06.620 "max_namespaces": 32, 00:13:06.620 "min_cntlid": 1, 00:13:06.620 "max_cntlid": 65519, 00:13:06.620 "namespaces": [ 00:13:06.620 { 00:13:06.620 "nsid": 1, 00:13:06.620 "bdev_name": "Null1", 00:13:06.620 "name": "Null1", 00:13:06.620 "nguid": "DD342E706D254E56B7830F7C9C91E076", 00:13:06.620 "uuid": "dd342e70-6d25-4e56-b783-0f7c9c91e076" 00:13:06.620 } 00:13:06.620 ] 00:13:06.620 }, 00:13:06.620 { 00:13:06.620 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:06.620 "subtype": "NVMe", 00:13:06.620 "listen_addresses": [ 00:13:06.620 { 00:13:06.620 "trtype": "TCP", 00:13:06.620 "adrfam": "IPv4", 00:13:06.620 "traddr": "10.0.0.2", 00:13:06.620 "trsvcid": "4420" 00:13:06.620 } 00:13:06.620 ], 00:13:06.620 "allow_any_host": true, 00:13:06.620 "hosts": [], 00:13:06.620 "serial_number": "SPDK00000000000002", 00:13:06.620 "model_number": "SPDK bdev Controller", 00:13:06.620 "max_namespaces": 32, 00:13:06.620 "min_cntlid": 1, 00:13:06.620 "max_cntlid": 65519, 00:13:06.620 "namespaces": [ 00:13:06.620 { 00:13:06.620 "nsid": 1, 00:13:06.620 "bdev_name": "Null2", 00:13:06.620 "name": "Null2", 00:13:06.620 "nguid": "DBB5086756AC42D2BED530CD1C7439A6", 00:13:06.620 "uuid": "dbb50867-56ac-42d2-bed5-30cd1c7439a6" 00:13:06.620 } 00:13:06.620 ] 00:13:06.620 }, 00:13:06.620 { 00:13:06.620 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:06.620 "subtype": "NVMe", 00:13:06.620 "listen_addresses": [ 00:13:06.620 { 00:13:06.620 "trtype": "TCP", 00:13:06.620 "adrfam": "IPv4", 00:13:06.620 "traddr": "10.0.0.2", 
00:13:06.620 "trsvcid": "4420" 00:13:06.620 } 00:13:06.620 ], 00:13:06.620 "allow_any_host": true, 00:13:06.620 "hosts": [], 00:13:06.620 "serial_number": "SPDK00000000000003", 00:13:06.620 "model_number": "SPDK bdev Controller", 00:13:06.620 "max_namespaces": 32, 00:13:06.620 "min_cntlid": 1, 00:13:06.620 "max_cntlid": 65519, 00:13:06.620 "namespaces": [ 00:13:06.620 { 00:13:06.620 "nsid": 1, 00:13:06.620 "bdev_name": "Null3", 00:13:06.620 "name": "Null3", 00:13:06.620 "nguid": "7F0CF863735F4C66B06CFEFF057998D8", 00:13:06.620 "uuid": "7f0cf863-735f-4c66-b06c-feff057998d8" 00:13:06.620 } 00:13:06.620 ] 00:13:06.620 }, 00:13:06.620 { 00:13:06.620 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:06.620 "subtype": "NVMe", 00:13:06.620 "listen_addresses": [ 00:13:06.620 { 00:13:06.620 "trtype": "TCP", 00:13:06.620 "adrfam": "IPv4", 00:13:06.620 "traddr": "10.0.0.2", 00:13:06.620 "trsvcid": "4420" 00:13:06.620 } 00:13:06.620 ], 00:13:06.620 "allow_any_host": true, 00:13:06.620 "hosts": [], 00:13:06.620 "serial_number": "SPDK00000000000004", 00:13:06.620 "model_number": "SPDK bdev Controller", 00:13:06.620 "max_namespaces": 32, 00:13:06.620 "min_cntlid": 1, 00:13:06.620 "max_cntlid": 65519, 00:13:06.620 "namespaces": [ 00:13:06.620 { 00:13:06.620 "nsid": 1, 00:13:06.620 "bdev_name": "Null4", 00:13:06.620 "name": "Null4", 00:13:06.620 "nguid": "FDEBBB9B5B0E406CA8D6B0609317A11F", 00:13:06.620 "uuid": "fdebbb9b-5b0e-406c-a8d6-b0609317a11f" 00:13:06.620 } 00:13:06.620 ] 00:13:06.620 } 00:13:06.620 ] 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.620 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:06.621 12:16:13 
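The create and delete passes traced across the last several blocks are deliberately symmetric: the setup loop built four null bdevs, one subsystem each with a namespace and a TCP listener, and the teardown loop now removes them in the same order before bdev_get_bdevs confirms nothing is left behind. A condensed sketch of the whole cycle (rpc stands in for the test's rpc_cmd wrapper; serial numbers follow the pattern in the trace):

#!/usr/bin/env bash
rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

for i in $(seq 1 4); do
    rpc bdev_null_create "Null$i" 102400 512
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

for i in $(seq 1 4); do
    rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc bdev_null_delete "Null$i"
done

# Empty output here is what lets check_bdevs stay empty in the trace above.
rpc bdev_get_bdevs | jq -r '.[].name'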
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.621 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.621 rmmod nvme_tcp 00:13:06.906 rmmod nvme_fabrics 00:13:06.906 rmmod nvme_keyring 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3567682 ']' 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3567682 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3567682 ']' 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3567682 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3567682 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3567682' 00:13:06.906 killing process with pid 3567682 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3567682 00:13:06.906 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3567682 00:13:08.289 12:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.289 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.191 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:10.191 00:13:10.191 real 0m10.451s 00:13:10.191 user 0m10.563s 00:13:10.191 sys 0m4.503s 00:13:10.191 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.191 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:10.191 ************************************ 00:13:10.191 END TEST nvmf_target_discovery 00:13:10.191 ************************************ 00:13:10.191 12:16:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:10.191 12:16:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.191 12:16:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.191 12:16:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:10.191 ************************************ 00:13:10.191 START TEST nvmf_referrals 00:13:10.191 ************************************ 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:10.192 * Looking for test storage... 
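The iptr call in the teardown above undoes the ipts call from setup without tracking any rule handles: every rule the test inserts carries an SPDK_NVMF comment, and cleanup simply filters the whole ruleset through grep. A sketch of the pair (requires root; the interface and port are the ones from this run):

#!/usr/bin/env bash
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
iptr                                                       # strip every tagged rule at once

Tagging by comment keeps the cleanup idempotent: it removes exactly the rules the test added, even if a crash landed between setup and teardown.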
00:13:10.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:10.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.192 --rc genhtml_branch_coverage=1 00:13:10.192 --rc genhtml_function_coverage=1 00:13:10.192 --rc genhtml_legend=1 00:13:10.192 --rc geninfo_all_blocks=1 00:13:10.192 --rc geninfo_unexecuted_blocks=1 00:13:10.192 00:13:10.192 ' 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:10.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.192 --rc genhtml_branch_coverage=1 00:13:10.192 --rc genhtml_function_coverage=1 00:13:10.192 --rc genhtml_legend=1 00:13:10.192 --rc geninfo_all_blocks=1 00:13:10.192 --rc geninfo_unexecuted_blocks=1 00:13:10.192 00:13:10.192 ' 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:10.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.192 --rc genhtml_branch_coverage=1 00:13:10.192 --rc genhtml_function_coverage=1 00:13:10.192 --rc genhtml_legend=1 00:13:10.192 --rc geninfo_all_blocks=1 00:13:10.192 --rc geninfo_unexecuted_blocks=1 00:13:10.192 00:13:10.192 ' 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:10.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.192 --rc genhtml_branch_coverage=1 00:13:10.192 --rc genhtml_function_coverage=1 00:13:10.192 --rc genhtml_legend=1 00:13:10.192 --rc geninfo_all_blocks=1 00:13:10.192 --rc geninfo_unexecuted_blocks=1 00:13:10.192 00:13:10.192 ' 00:13:10.192 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.192 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:10.451 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.718 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:15.719 12:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:15.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:15.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:15.719 
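Both "Found ..." lines come from the same sysfs walk: for each supported PCI function, glob its net/ directory to learn which kernel interface backs it. As a standalone sketch (PCI address and resulting interface name taken from this run):

  pci=0000:af:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev bound to the function
  [[ -e ${pci_net_devs[0]} ]] || { echo "no net device under $pci"; exit 1; }
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifnames
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 in this run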
12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:15.719 Found net devices under 0000:af:00.0: cvl_0_0 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:15.719 Found net devices under 0000:af:00.1: cvl_0_1 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.719 12:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:15.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:13:15.719 00:13:15.719 --- 10.0.0.2 ping statistics --- 00:13:15.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.719 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:13:15.719 00:13:15.719 --- 10.0.0.1 ping statistics --- 00:13:15.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.719 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:15.719 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3571609 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3571609 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3571609 ']' 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
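The bring-up just traced is the generic nvmftestinit/nvmfappstart path from common.sh: move the second NIC into a network namespace so one host can play both target and initiator, open the firewall, prove reachability in both directions, then launch nvmf_tgt inside the namespace. Condensed into a runnable sketch (root required; interface names as in this run, and the relative binary path assumes an SPDK build tree):

  ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"                  # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  # tag the rule so teardown can strip only SPDK_NVMF-marked entries
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: referral test'
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
  ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &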
00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.720 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.978 [2024-12-10 12:16:22.563892] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:13:15.978 [2024-12-10 12:16:22.563987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.978 [2024-12-10 12:16:22.683245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.978 [2024-12-10 12:16:22.792646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.978 [2024-12-10 12:16:22.792690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.978 [2024-12-10 12:16:22.792700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.978 [2024-12-10 12:16:22.792710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.978 [2024-12-10 12:16:22.792718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.978 [2024-12-10 12:16:22.795065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.978 [2024-12-10 12:16:22.795139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.978 [2024-12-10 12:16:22.795200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.978 [2024-12-10 12:16:22.795221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.544 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.544 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:16.544 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.544 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.802 [2024-12-10 12:16:23.411021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:13:16.802 [2024-12-10 12:16:23.440655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:16.802 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:16.803 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:17.061 12:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:17.061 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:17.319 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:17.319 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:17.319 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:17.319 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.319 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:17.319 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:17.576 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:17.576 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:17.576 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:17.576 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:17.576 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:17.576 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.576 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.833 12:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:17.833 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:18.091 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:18.091 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:18.091 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:18.091 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:18.091 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:18.091 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:18.091 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:18.348 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
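The initiator-side assertion repeated through the trace above (get_referral_ips nvme) reduces to one pipeline: dump the discovery log as JSON and keep only entries that are not the current discovery subsystem itself, i.e. the referrals. A standalone sketch, with the --hostnqn/--hostid flags from this run omitted since nvme-cli treats them as optional:

  # Query the discovery service the target exposes on 10.0.0.2:8009 and
  # print the referral transport addresses, sorted for comparison.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort
  # Right after the three adds this prints 127.0.0.2 127.0.0.3 127.0.0.4;
  # after the removals it prints nothing, matching the '' == '' checks above.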
00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.605 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.605 rmmod nvme_tcp 00:13:18.605 rmmod nvme_fabrics 00:13:18.863 rmmod nvme_keyring 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3571609 ']' 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3571609 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3571609 ']' 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3571609 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3571609 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3571609' 00:13:18.863 killing process with pid 3571609 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3571609 00:13:18.863 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3571609 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.238 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.238 12:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.141 00:13:22.141 real 0m11.916s 00:13:22.141 user 0m17.118s 00:13:22.141 sys 0m4.897s 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:22.141 ************************************ 00:13:22.141 END TEST nvmf_referrals 00:13:22.141 ************************************ 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.141 ************************************ 00:13:22.141 START TEST nvmf_connect_disconnect 00:13:22.141 ************************************ 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:22.141 * Looking for test storage... 00:13:22.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.141 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.400 12:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:22.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.400 --rc genhtml_branch_coverage=1 00:13:22.400 --rc genhtml_function_coverage=1 00:13:22.400 --rc genhtml_legend=1 00:13:22.400 --rc geninfo_all_blocks=1 00:13:22.400 --rc geninfo_unexecuted_blocks=1 00:13:22.400 00:13:22.400 ' 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:22.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.400 --rc genhtml_branch_coverage=1 00:13:22.400 --rc genhtml_function_coverage=1 00:13:22.400 --rc genhtml_legend=1 00:13:22.400 --rc geninfo_all_blocks=1 00:13:22.400 --rc geninfo_unexecuted_blocks=1 00:13:22.400 00:13:22.400 ' 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:22.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.400 --rc genhtml_branch_coverage=1 00:13:22.400 --rc genhtml_function_coverage=1 00:13:22.400 --rc genhtml_legend=1 00:13:22.400 --rc geninfo_all_blocks=1 00:13:22.400 --rc geninfo_unexecuted_blocks=1 00:13:22.400 00:13:22.400 ' 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:22.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.400 --rc genhtml_branch_coverage=1 00:13:22.400 --rc genhtml_function_coverage=1 00:13:22.400 --rc genhtml_legend=1 00:13:22.400 --rc geninfo_all_blocks=1 00:13:22.400 --rc geninfo_unexecuted_blocks=1 00:13:22.400 00:13:22.400 ' 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.400 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:22.400 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:22.400 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.400 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.400 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.400 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.400 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.401 12:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.401 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.666 
12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:27.666 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.666 
12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:27.666 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:27.666 Found net devices under 0000:af:00.0: cvl_0_0 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
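For readers following the xtrace above: gather_supported_nvmf_pci_devs resolves each whitelisted PCI function to its kernel net interface through sysfs, which is where the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from. A minimal standalone sketch of that walk, assuming the E810 device-ID whitelist seen in this run (0x1592/0x159b); the variable names are illustrative, not the ones in nvmf/common.sh:

    #!/usr/bin/env bash
    # Map Intel E810 PCI functions to their net interfaces via sysfs,
    # mirroring the pci_net_devs loop traced above.
    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor" 2>/dev/null)
        device=$(cat "$pci/device" 2>/dev/null)
        [[ $vendor == "$intel" ]] || continue
        case $device in
            0x1592 | 0x159b) ;;   # E810 IDs matched in this run
            *) continue ;;
        esac
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done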
00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:27.666 Found net devices under 0000:af:00.1: cvl_0_1 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.666 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:13:27.667 00:13:27.667 --- 10.0.0.2 ping statistics --- 00:13:27.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.667 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:13:27.667 00:13:27.667 --- 10.0.0.1 ping statistics --- 00:13:27.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.667 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3575634 00:13:27.667 12:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3575634 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3575634 ']' 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.667 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:27.667 [2024-12-10 12:16:34.432823] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:13:27.667 [2024-12-10 12:16:34.432913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.925 [2024-12-10 12:16:34.551039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.925 [2024-12-10 12:16:34.653917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.925 [2024-12-10 12:16:34.653963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.925 [2024-12-10 12:16:34.653972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.925 [2024-12-10 12:16:34.653982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.925 [2024-12-10 12:16:34.653991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
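The stretch from nvmf_tcp_init to the nvmf_tgt launch above wires the two E810 ports into a self-contained topology: the target port is moved into its own network namespace so initiator and target traffic actually traverses the link, and the two pings confirm reachability in both directions before the target starts. A condensed reconstruction of that wiring, with the interface and namespace names taken from this run (run as root; this is a sketch of the trace, not the script itself):

    # Target side: isolate cvl_0_0 in its own namespace at 10.0.0.2.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator side: cvl_0_1 stays in the root namespace at 10.0.0.1.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Open the NVMe/TCP port, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF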
00:13:27.925 [2024-12-10 12:16:34.656422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.925 [2024-12-10 12:16:34.656495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.925 [2024-12-10 12:16:34.656556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.925 [2024-12-10 12:16:34.656566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.488 [2024-12-10 12:16:35.277196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.488 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.745 12:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.745 [2024-12-10 12:16:35.398960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:28.745 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:31.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.496 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:35.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.025 rmmod nvme_tcp 00:17:23.025 rmmod nvme_fabrics 00:17:23.025 rmmod nvme_keyring 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3575634 ']' 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3575634 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3575634 ']' 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3575634 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
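Each of the 100 "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is one pass of the connect/disconnect loop: attach the kernel initiator to cnode1 over TCP, wait for the namespace to surface as a block device, then drop the association. A hedged reconstruction of one iteration follows; the connect flags come straight from the trace ("-i 8" asks for 8 I/O queues), but the wait step is an assumption about how the script synchronizes, not a copy of it:

    for ((i = 0; i < 100; i++)); do
        # NVME_CONNECT='nvme connect -i 8' in the trace above.
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 \
            -a 10.0.0.2 -s 4420 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

        # Assumed synchronization: poll until the SPDK serial is visible.
        until nvme list 2>/dev/null | grep -q SPDKISFASTANDAWESOME; do
            sleep 0.1
        done

        # Emits the "disconnected 1 controller(s)" lines logged above.
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done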
00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3575634 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3575634' 00:17:23.025 killing process with pid 3575634 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3575634 00:17:23.025 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3575634 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.402 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:26.937 00:17:26.937 real 4m4.381s 00:17:26.937 user 15m33.428s 00:17:26.937 sys 0m24.991s 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:26.937 ************************************ 00:17:26.937 END TEST nvmf_connect_disconnect 00:17:26.937 ************************************ 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.937 12:20:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.937 ************************************ 00:17:26.937 START TEST nvmf_multitarget 00:17:26.937 ************************************ 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:26.937 * Looking for test storage... 00:17:26.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.937 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.938 --rc genhtml_branch_coverage=1 00:17:26.938 --rc genhtml_function_coverage=1 00:17:26.938 --rc genhtml_legend=1 00:17:26.938 --rc geninfo_all_blocks=1 00:17:26.938 --rc geninfo_unexecuted_blocks=1 00:17:26.938 00:17:26.938 ' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.938 --rc genhtml_branch_coverage=1 00:17:26.938 --rc genhtml_function_coverage=1 00:17:26.938 --rc genhtml_legend=1 00:17:26.938 --rc geninfo_all_blocks=1 00:17:26.938 --rc geninfo_unexecuted_blocks=1 00:17:26.938 00:17:26.938 ' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.938 --rc genhtml_branch_coverage=1 00:17:26.938 --rc genhtml_function_coverage=1 00:17:26.938 --rc genhtml_legend=1 00:17:26.938 --rc geninfo_all_blocks=1 00:17:26.938 --rc geninfo_unexecuted_blocks=1 00:17:26.938 00:17:26.938 ' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.938 --rc genhtml_branch_coverage=1 00:17:26.938 --rc genhtml_function_coverage=1 00:17:26.938 --rc genhtml_legend=1 00:17:26.938 --rc geninfo_all_blocks=1 00:17:26.938 --rc geninfo_unexecuted_blocks=1 00:17:26.938 00:17:26.938 ' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.938 12:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:26.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:26.938 12:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:26.938 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:32.207 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.207 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.207 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.207 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.207 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.207 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.207 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
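The "[: : integer expression expected" message from nvmf/common.sh line 33, which shows up both times the file is sourced in this log, is a plain unset-variable test: '[' '' -eq 1 ']' asks test to compare an empty string as an integer. The usual hardening is to expand the flag with a numeric default; a minimal sketch with a hypothetical variable name, since the trace does not show which flag was empty:

    SOME_TEST_FLAG=""   # hypothetical stand-in for whichever flag is unset

    # Unsafe: an empty value reproduces the warning seen in the log.
    [ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled

    # Hardened: the :-0 default keeps the test well-formed either way.
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled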
00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:32.208 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:32.208 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:32.208 Found net devices under 0000:af:00.0: cvl_0_0 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:32.208 Found net devices under 0000:af:00.1: cvl_0_1 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:17:32.208 00:17:32.208 --- 10.0.0.2 ping statistics --- 00:17:32.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.208 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:17:32.208 00:17:32.208 --- 10.0.0.1 ping statistics --- 00:17:32.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.208 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.208 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3619149 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3619149 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3619149 ']' 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:32.209 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:32.209 [2024-12-10 12:20:38.970362] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
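The block above is the stock nvmf_tcp_init sequence: the first ice port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, a single tagged iptables rule opens the NVMe/TCP port, and both directions are ping-verified before the target app starts. Condensed to the commands actually traced (interface names and addresses are the ones from this run and will differ on other NICs):

  ip netns add cvl_0_0_ns_spdk                        # private ns for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:...'            # tag lets teardown grep the rule back out
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

Because the target runs inside the namespace (every target-side command below is wrapped in ip netns exec cvl_0_0_ns_spdk), initiator and target traffic cross the physical link even though both ports sit in one host.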
00:17:32.209 [2024-12-10 12:20:38.970451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.467 [2024-12-10 12:20:39.088633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.467 [2024-12-10 12:20:39.192089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.467 [2024-12-10 12:20:39.192130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.467 [2024-12-10 12:20:39.192141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.467 [2024-12-10 12:20:39.192151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.467 [2024-12-10 12:20:39.192160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.467 [2024-12-10 12:20:39.194335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.467 [2024-12-10 12:20:39.194411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.467 [2024-12-10 12:20:39.194431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.467 [2024-12-10 12:20:39.194424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:33.032 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:33.289 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:33.289 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:33.289 "nvmf_tgt_1" 00:17:33.289 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:33.547 "nvmf_tgt_2" 00:17:33.547 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
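The multitarget body running through these entries reduces to a create/verify/delete cycle against the test's RPC wrapper: jq length on nvmf_get_targets must report 1 target at start (the default one), 3 after two nvmf_create_target calls, and 1 again after both deletes. As a sketch (the wrapper path is shortened here; flags are the ones in the trace):

  rpc=test/nvmf/target/multitarget_rpc.py             # path shortened
  [ "$($rpc nvmf_get_targets | jq length)" = 1 ]      # only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" = 3 ]
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" = 1 ]      # back to just the default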
00:17:33.547 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:33.547 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:33.547 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:33.547 true 00:17:33.547 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:33.805 true 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.805 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.805 rmmod nvme_tcp 00:17:33.805 rmmod nvme_fabrics 00:17:33.805 rmmod nvme_keyring 00:17:34.063 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3619149 ']' 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3619149 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3619149 ']' 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3619149 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3619149 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.064 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3619149' 00:17:34.064 killing process with pid 3619149 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3619149 00:17:34.064 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3619149 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.440 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.343 00:17:37.343 real 0m10.644s 00:17:37.343 user 0m12.433s 00:17:37.343 sys 0m4.659s 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:37.343 ************************************ 00:17:37.343 END TEST nvmf_multitarget 00:17:37.343 ************************************ 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.343 ************************************ 00:17:37.343 START TEST nvmf_rpc 00:17:37.343 ************************************ 00:17:37.343 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:37.343 * Looking for test storage... 
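nvmftestfini, traced just above, undoes the setup in reverse: unload the kernel initiator modules, kill the target after a kill -0 liveness check confirms it is still the reactor process, strip only the SPDK-tagged iptables rules, and tear down the namespace. Roughly (the explicit ip netns delete is an assumption about what _remove_spdk_ns does; the rest is straight from the trace):

  modprobe -v -r nvme-tcp                   # also drops nvme-fabrics / nvme-keyring
  kill "$nvmfpid" && wait "$nvmfpid"        # killprocess, after the kill -0 check
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything but our tagged rules
  ip netns delete cvl_0_0_ns_spdk           # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1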
00:17:37.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.344 --rc genhtml_branch_coverage=1 00:17:37.344 --rc genhtml_function_coverage=1 00:17:37.344 --rc genhtml_legend=1 00:17:37.344 --rc geninfo_all_blocks=1 00:17:37.344 --rc geninfo_unexecuted_blocks=1 00:17:37.344 00:17:37.344 ' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.344 --rc genhtml_branch_coverage=1 00:17:37.344 --rc genhtml_function_coverage=1 00:17:37.344 --rc genhtml_legend=1 00:17:37.344 --rc geninfo_all_blocks=1 00:17:37.344 --rc geninfo_unexecuted_blocks=1 00:17:37.344 00:17:37.344 ' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.344 --rc genhtml_branch_coverage=1 00:17:37.344 --rc genhtml_function_coverage=1 00:17:37.344 --rc genhtml_legend=1 00:17:37.344 --rc geninfo_all_blocks=1 00:17:37.344 --rc geninfo_unexecuted_blocks=1 00:17:37.344 00:17:37.344 ' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.344 --rc genhtml_branch_coverage=1 00:17:37.344 --rc genhtml_function_coverage=1 00:17:37.344 --rc genhtml_legend=1 00:17:37.344 --rc geninfo_all_blocks=1 00:17:37.344 --rc geninfo_unexecuted_blocks=1 00:17:37.344 00:17:37.344 ' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
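The lt 1.15 2 exchange above is the generic dotted-version comparator from scripts/common.sh, used here to decide which lcov option names apply. Its core, reduced to a sketch (the real helper also validates every field through a separate decimal check):

  lt() {   # succeed iff version $1 < version $2
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields pad to 0
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                               # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: use --rc option names"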
00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.344 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.603 12:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.603 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.604 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.604 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.604 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.604 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.604 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.604 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.604 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.872 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:43.212 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:43.212 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:43.212 Found net devices under 0000:af:00.0: cvl_0_0 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:43.212 Found net devices under 0000:af:00.1: cvl_0_1 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:43.212 12:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:17:43.212 00:17:43.212 --- 10.0.0.2 ping statistics --- 00:17:43.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.212 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:17:43.212 00:17:43.212 --- 10.0.0.1 ping statistics --- 00:17:43.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.212 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.212 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3623095 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3623095 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3623095 ']' 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.502 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.503 [2024-12-10 12:20:50.094242] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
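nvmfappstart, above, launches nvmf_tgt inside the target namespace with a 4-core mask and then blocks in waitforlisten until the RPC socket answers (max_retries=100 in the trace). The shape of it, paraphrased (the rpc_get_methods probe is an assumption about the helper's internals):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for (( i = 0; i < 100; i++ )); do          # max_retries from the trace
      kill -0 "$nvmfpid" || exit 1           # bail out if the app died during startup
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done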
00:17:43.503 [2024-12-10 12:20:50.094334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.503 [2024-12-10 12:20:50.213130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.761 [2024-12-10 12:20:50.325525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.761 [2024-12-10 12:20:50.325567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.761 [2024-12-10 12:20:50.325578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.761 [2024-12-10 12:20:50.325588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.761 [2024-12-10 12:20:50.325597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.761 [2024-12-10 12:20:50.327851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.761 [2024-12-10 12:20:50.327924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.761 [2024-12-10 12:20:50.327985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.761 [2024-12-10 12:20:50.327994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:44.329 "tick_rate": 2100000000, 00:17:44.329 "poll_groups": [ 00:17:44.329 { 00:17:44.329 "name": "nvmf_tgt_poll_group_000", 00:17:44.329 "admin_qpairs": 0, 00:17:44.329 "io_qpairs": 0, 00:17:44.329 "current_admin_qpairs": 0, 00:17:44.329 "current_io_qpairs": 0, 00:17:44.329 "pending_bdev_io": 0, 00:17:44.329 "completed_nvme_io": 0, 00:17:44.329 "transports": [] 00:17:44.329 }, 00:17:44.329 { 00:17:44.329 "name": "nvmf_tgt_poll_group_001", 00:17:44.329 "admin_qpairs": 0, 00:17:44.329 "io_qpairs": 0, 00:17:44.329 "current_admin_qpairs": 0, 00:17:44.329 "current_io_qpairs": 0, 00:17:44.329 "pending_bdev_io": 0, 00:17:44.329 "completed_nvme_io": 0, 00:17:44.329 "transports": [] 00:17:44.329 }, 00:17:44.329 { 00:17:44.329 "name": "nvmf_tgt_poll_group_002", 00:17:44.329 "admin_qpairs": 0, 00:17:44.329 "io_qpairs": 0, 00:17:44.329 
"current_admin_qpairs": 0, 00:17:44.329 "current_io_qpairs": 0, 00:17:44.329 "pending_bdev_io": 0, 00:17:44.329 "completed_nvme_io": 0, 00:17:44.329 "transports": [] 00:17:44.329 }, 00:17:44.329 { 00:17:44.329 "name": "nvmf_tgt_poll_group_003", 00:17:44.329 "admin_qpairs": 0, 00:17:44.329 "io_qpairs": 0, 00:17:44.329 "current_admin_qpairs": 0, 00:17:44.329 "current_io_qpairs": 0, 00:17:44.329 "pending_bdev_io": 0, 00:17:44.329 "completed_nvme_io": 0, 00:17:44.329 "transports": [] 00:17:44.329 } 00:17:44.329 ] 00:17:44.329 }' 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:44.329 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.329 [2024-12-10 12:20:51.063974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.329 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:44.330 "tick_rate": 2100000000, 00:17:44.330 "poll_groups": [ 00:17:44.330 { 00:17:44.330 "name": "nvmf_tgt_poll_group_000", 00:17:44.330 "admin_qpairs": 0, 00:17:44.330 "io_qpairs": 0, 00:17:44.330 "current_admin_qpairs": 0, 00:17:44.330 "current_io_qpairs": 0, 00:17:44.330 "pending_bdev_io": 0, 00:17:44.330 "completed_nvme_io": 0, 00:17:44.330 "transports": [ 00:17:44.330 { 00:17:44.330 "trtype": "TCP" 00:17:44.330 } 00:17:44.330 ] 00:17:44.330 }, 00:17:44.330 { 00:17:44.330 "name": "nvmf_tgt_poll_group_001", 00:17:44.330 "admin_qpairs": 0, 00:17:44.330 "io_qpairs": 0, 00:17:44.330 "current_admin_qpairs": 0, 00:17:44.330 "current_io_qpairs": 0, 00:17:44.330 "pending_bdev_io": 0, 00:17:44.330 "completed_nvme_io": 0, 00:17:44.330 "transports": [ 00:17:44.330 { 00:17:44.330 "trtype": "TCP" 00:17:44.330 } 00:17:44.330 ] 00:17:44.330 }, 00:17:44.330 { 00:17:44.330 "name": "nvmf_tgt_poll_group_002", 00:17:44.330 "admin_qpairs": 0, 00:17:44.330 "io_qpairs": 0, 00:17:44.330 "current_admin_qpairs": 0, 00:17:44.330 "current_io_qpairs": 0, 00:17:44.330 "pending_bdev_io": 0, 00:17:44.330 "completed_nvme_io": 0, 00:17:44.330 "transports": [ 00:17:44.330 { 00:17:44.330 "trtype": "TCP" 
00:17:44.330 } 00:17:44.330 ] 00:17:44.330 }, 00:17:44.330 { 00:17:44.330 "name": "nvmf_tgt_poll_group_003", 00:17:44.330 "admin_qpairs": 0, 00:17:44.330 "io_qpairs": 0, 00:17:44.330 "current_admin_qpairs": 0, 00:17:44.330 "current_io_qpairs": 0, 00:17:44.330 "pending_bdev_io": 0, 00:17:44.330 "completed_nvme_io": 0, 00:17:44.330 "transports": [ 00:17:44.330 { 00:17:44.330 "trtype": "TCP" 00:17:44.330 } 00:17:44.330 ] 00:17:44.330 } 00:17:44.330 ] 00:17:44.330 }' 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:44.330 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 Malloc1 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.589 [2024-12-10 12:20:51.303379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:44.589 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:44.590 [2024-12-10 12:20:51.332656] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:44.590 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:44.590 could not add new controller: failed to write to nvme-fabrics device 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:44.590 12:20:51 
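
The NOT wrapper driving the expected-failure connect above inverts the wrapped command's exit status. A rough sketch from the traced lines (common/autotest_common.sh @652-@679); the (( es > 128 )) signal guard and the allowed-pattern check hinted at by [[ -n '' ]] are elided:

    # Rough shape of the NOT helper as traced; guards for signal exits and
    # allowed error patterns are omitted in this sketch.
    NOT() {
        local es=0
        "$@" || es=$?
        ((!es == 0))   # succeed only if the wrapped command failed
    }

Here the connect fails with es=1 because the subsystem does not yet allow the host NQN, so NOT itself returns success and the test proceeds.
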
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.590 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.968 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.968 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:45.968 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.968 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:45.968 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:47.871 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:47.871 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:47.871 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.871 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:47.871 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.871 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:47.872 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:48.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.131 [2024-12-10 12:20:54.883306] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:48.131 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:48.131 could not add new controller: failed to write to nvme-fabrics device 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.131 
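
Steps @52 through @73 of target/rpc.sh, traced above and continuing below, demonstrate per-host access control end to end: with allow_any_host disabled the connect is rejected ("does not allow host"), after nvmf_subsystem_add_host it succeeds, after nvmf_subsystem_remove_host it is rejected again, and nvmf_subsystem_allow_any_host -e reopens the subsystem. A condensed sketch with scripts/rpc.py standing in for the suite's rpc_cmd wrapper (the client invocation is an assumption; RPC names, flags, NQNs and addresses follow the trace):

    # Condensed host access-control round trip, mirroring the traced steps.
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562

    scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"      # deny unknown hosts
    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID" || true            # rejected

    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"    # whitelist host
    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"                    # accepted
    nvme disconnect -n "$SUBNQN"

    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN" # denied again
    scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"      # open to all
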
12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.131 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.506 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.506 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:49.506 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.506 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:49.506 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:51.436 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:51.436 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:51.436 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.436 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:51.436 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.436 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:51.436 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:51.695 
12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.695 [2024-12-10 12:20:58.423939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.695 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:53.072 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:53.072 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:53.072 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.072 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:53.072 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.976 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.976 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.976 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.976 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.976 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.976 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:54.976 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:55.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.235 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.236 [2024-12-10 12:21:01.967783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
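
Each successful connect in these cycles is followed by waitforserial, which polls lsblk until a block device exposing the subsystem's serial appears; waitforserial_disconnect does the inverse after nvme disconnect. An approximate reconstruction from the traced lines (common/autotest_common.sh @1202-@1212); the placement of the sleep within the loop is simplified relative to the trace:

    # Approximate waitforserial: poll lsblk up to 16 times, two seconds apart,
    # until the expected number of devices with this serial shows up.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }

The trace invokes it as waitforserial SPDKISFASTANDAWESOME, matching the serial passed to nvmf_create_subsystem with -s.
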
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.236 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.612 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.612 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:56.612 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.612 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:56.612 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:58.513 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:58.513 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:58.513 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.513 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:58.513 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.513 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:58.513 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.771 [2024-12-10 12:21:05.441337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.771 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.772 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.772 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.772 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.772 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:00.147 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:00.148 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:00.148 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:00.148 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:00.148 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:02.052 
12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:02.052 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:02.052 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.052 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:02.052 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.052 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:02.052 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 [2024-12-10 12:21:08.966353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.689 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.689 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:03.689 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.690 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:03.690 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:05.591 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:05.591 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:05.591 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.591 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:05.591 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.591 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:05.591 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.850 [2024-12-10 12:21:12.534784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.850 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:07.225 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.225 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:07.225 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.225 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:07.225 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:09.126 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:09.126 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:09.126 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.126 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:09.126 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.126 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:09.126 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.384 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:09.385 
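
The second seq 1 5 loop starting here drives the same subsystem RPCs with no host attached, a condensed transcription of the traced commands (rpc_cmd is the suite's JSON-RPC wrapper). Unlike the first loop, nvmf_subsystem_add_ns here omits -n, so the namespace defaults to nsid 1 and the matching remove uses 1 instead of 5:

    # RPC-only churn loop (target/rpc.sh @99-@107), transcribed from the trace.
    for i in $(seq 1 "$loops"); do   # loops=5 in this run
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # defaults to nsid 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
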
12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 [2024-12-10 12:21:16.021410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 [2024-12-10 12:21:16.069522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 
12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 [2024-12-10 12:21:16.117662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 [2024-12-10 12:21:16.165827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.385 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.386 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 [2024-12-10 12:21:16.214010] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:09.644 "tick_rate": 2100000000, 00:18:09.644 "poll_groups": [ 00:18:09.644 { 00:18:09.644 "name": "nvmf_tgt_poll_group_000", 00:18:09.644 "admin_qpairs": 2, 00:18:09.644 "io_qpairs": 168, 00:18:09.644 "current_admin_qpairs": 0, 00:18:09.644 "current_io_qpairs": 0, 00:18:09.644 "pending_bdev_io": 0, 00:18:09.644 "completed_nvme_io": 224, 00:18:09.644 "transports": [ 00:18:09.644 { 00:18:09.644 "trtype": "TCP" 00:18:09.644 } 00:18:09.644 ] 00:18:09.644 }, 00:18:09.644 { 00:18:09.644 "name": "nvmf_tgt_poll_group_001", 00:18:09.644 "admin_qpairs": 2, 00:18:09.644 "io_qpairs": 168, 00:18:09.644 "current_admin_qpairs": 0, 00:18:09.644 "current_io_qpairs": 0, 00:18:09.644 "pending_bdev_io": 0, 00:18:09.644 "completed_nvme_io": 302, 00:18:09.644 "transports": [ 00:18:09.644 { 00:18:09.644 "trtype": "TCP" 00:18:09.644 } 00:18:09.644 ] 00:18:09.644 }, 00:18:09.644 { 00:18:09.644 "name": "nvmf_tgt_poll_group_002", 00:18:09.644 "admin_qpairs": 1, 00:18:09.644 "io_qpairs": 168, 00:18:09.644 "current_admin_qpairs": 0, 00:18:09.644 "current_io_qpairs": 0, 00:18:09.644 "pending_bdev_io": 0, 00:18:09.644 "completed_nvme_io": 234, 00:18:09.644 "transports": [ 00:18:09.644 { 00:18:09.644 "trtype": "TCP" 00:18:09.644 } 00:18:09.644 ] 00:18:09.644 }, 00:18:09.644 { 00:18:09.644 "name": "nvmf_tgt_poll_group_003", 00:18:09.644 "admin_qpairs": 2, 00:18:09.644 "io_qpairs": 168, 00:18:09.644 "current_admin_qpairs": 0, 00:18:09.644 "current_io_qpairs": 0, 00:18:09.644 "pending_bdev_io": 0, 00:18:09.644 "completed_nvme_io": 262, 00:18:09.644 "transports": [ 00:18:09.644 { 00:18:09.644 "trtype": "TCP" 00:18:09.644 } 00:18:09.644 ] 00:18:09.644 } 00:18:09.644 ] 00:18:09.644 }' 00:18:09.644 12:21:16 
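
The closing jsum reductions below sum the per-poll-group counters from this final stats blob. A quick check of the figures they assert against:

    # Sanity arithmetic behind the final (( 7 > 0 )) and (( 672 > 0 )) checks:
    # admin_qpairs per poll group are 2, 2, 1, 2; io_qpairs are 168 each.
    echo $((2 + 2 + 1 + 2))   # 7
    echo $((4 * 168))         # 672
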
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.645 rmmod nvme_tcp 00:18:09.645 rmmod nvme_fabrics 00:18:09.645 rmmod nvme_keyring 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3623095 ']' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3623095 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3623095 ']' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3623095 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.645 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3623095 00:18:09.903 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.903 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.903 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3623095' 00:18:09.903 killing process with pid 3623095 00:18:09.903 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3623095 00:18:09.903 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3623095 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.280 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.184 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:13.184 00:18:13.184 real 0m35.927s 00:18:13.184 user 1m49.825s 00:18:13.184 sys 0m6.584s 00:18:13.185 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.185 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.185 ************************************ 00:18:13.185 END TEST nvmf_rpc 00:18:13.185 ************************************ 00:18:13.185 12:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:13.185 12:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.185 12:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.185 12:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.185 ************************************ 00:18:13.185 START TEST nvmf_invalid 00:18:13.185 ************************************ 00:18:13.185 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:13.444 * Looking for test storage... 
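invalid.sh, which starts here, drives nvmf_create_subsystem with malformed arguments and asserts on the JSON-RPC error text. The checks traced below all follow one pattern; a condensed sketch, assuming the failed RPC's error message ends up on the captured output as the rpc_cmd wrapper arranges:

    #!/usr/bin/env bash
    # Run an RPC that must fail, then match the JSON-RPC error message.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # A nonexistent target name must be rejected.
    out=$("$rpc" nvmf_create_subsystem -t foobar \
          nqn.2016-06.io.spdk:cnode13001 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # A serial number containing a control character (\x1f) must be
    # rejected with "Invalid SN"; bad model numbers get "Invalid MN".
    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\x1f' \
          nqn.2016-06.io.spdk:cnode18987 2>&1) || true
    [[ $out == *"Invalid SN"* ]]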
00:18:13.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.444 --rc genhtml_branch_coverage=1 00:18:13.444 --rc genhtml_function_coverage=1 00:18:13.444 --rc genhtml_legend=1 00:18:13.444 --rc geninfo_all_blocks=1 00:18:13.444 --rc geninfo_unexecuted_blocks=1 00:18:13.444 00:18:13.444 ' 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.444 --rc genhtml_branch_coverage=1 00:18:13.444 --rc genhtml_function_coverage=1 00:18:13.444 --rc genhtml_legend=1 00:18:13.444 --rc geninfo_all_blocks=1 00:18:13.444 --rc geninfo_unexecuted_blocks=1 00:18:13.444 00:18:13.444 ' 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.444 --rc genhtml_branch_coverage=1 00:18:13.444 --rc genhtml_function_coverage=1 00:18:13.444 --rc genhtml_legend=1 00:18:13.444 --rc geninfo_all_blocks=1 00:18:13.444 --rc geninfo_unexecuted_blocks=1 00:18:13.444 00:18:13.444 ' 00:18:13.444 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.444 --rc genhtml_branch_coverage=1 00:18:13.444 --rc genhtml_function_coverage=1 00:18:13.444 --rc genhtml_legend=1 00:18:13.445 --rc geninfo_all_blocks=1 00:18:13.445 --rc geninfo_unexecuted_blocks=1 00:18:13.445 00:18:13.445 ' 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:13.445 12:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.445 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:18.713 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:18.713 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:18.714 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:18.714 Found net devices under 0000:af:00.0: cvl_0_0 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:18.714 Found net devices under 0000:af:00.1: cvl_0_1 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:18.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:18:18.714 00:18:18.714 --- 10.0.0.2 ping statistics --- 00:18:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.714 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:18:18.714 00:18:18.714 --- 10.0.0.1 ping statistics --- 00:18:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.714 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3631481 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3631481 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3631481 ']' 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:18.714 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:18.714 [2024-12-10 12:21:24.982307] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
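Here nvmfappstart brings the target up inside the cvl_0_0_ns_spdk namespace configured just above, with the initiator side left in the root namespace. Condensed from the trace (the NIC names cvl_0_0/cvl_0_1 are specific to this test bed, and the nvmf_tgt path is shortened):

    #!/usr/bin/env bash
    # Move the target-side port into a namespace, address both ends,
    # open TCP/4420 from the initiator side, then start the target.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # The test then polls the RPC socket (waitforlisten) before issuing
    # any nvmf_* commands to the new process.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &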
00:18:18.714 [2024-12-10 12:21:24.982394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.714 [2024-12-10 12:21:25.099565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:18.714 [2024-12-10 12:21:25.203448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.714 [2024-12-10 12:21:25.203491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.714 [2024-12-10 12:21:25.203501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.714 [2024-12-10 12:21:25.203510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.714 [2024-12-10 12:21:25.203518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.714 [2024-12-10 12:21:25.205614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.714 [2024-12-10 12:21:25.205690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.714 [2024-12-10 12:21:25.205797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.714 [2024-12-10 12:21:25.205806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:19.281 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13001 00:18:19.281 [2024-12-10 12:21:26.013289] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:19.281 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:19.281 { 00:18:19.281 "nqn": "nqn.2016-06.io.spdk:cnode13001", 00:18:19.281 "tgt_name": "foobar", 00:18:19.281 "method": "nvmf_create_subsystem", 00:18:19.281 "req_id": 1 00:18:19.281 } 00:18:19.281 Got JSON-RPC error response 00:18:19.281 response: 00:18:19.281 { 00:18:19.281 "code": -32603, 00:18:19.281 "message": "Unable to find target foobar" 00:18:19.281 }' 00:18:19.281 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:19.281 { 00:18:19.281 "nqn": "nqn.2016-06.io.spdk:cnode13001", 00:18:19.281 "tgt_name": "foobar", 00:18:19.281 "method": "nvmf_create_subsystem", 00:18:19.281 "req_id": 1 00:18:19.281 } 00:18:19.281 Got JSON-RPC error response 00:18:19.281 
response: 00:18:19.281 { 00:18:19.281 "code": -32603, 00:18:19.281 "message": "Unable to find target foobar" 00:18:19.281 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:19.281 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:19.281 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18987 00:18:19.539 [2024-12-10 12:21:26.213966] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18987: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:19.539 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:19.539 { 00:18:19.539 "nqn": "nqn.2016-06.io.spdk:cnode18987", 00:18:19.539 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:19.539 "method": "nvmf_create_subsystem", 00:18:19.539 "req_id": 1 00:18:19.539 } 00:18:19.539 Got JSON-RPC error response 00:18:19.539 response: 00:18:19.539 { 00:18:19.539 "code": -32602, 00:18:19.539 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:19.540 }' 00:18:19.540 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:19.540 { 00:18:19.540 "nqn": "nqn.2016-06.io.spdk:cnode18987", 00:18:19.540 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:19.540 "method": "nvmf_create_subsystem", 00:18:19.540 "req_id": 1 00:18:19.540 } 00:18:19.540 Got JSON-RPC error response 00:18:19.540 response: 00:18:19.540 { 00:18:19.540 "code": -32602, 00:18:19.540 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:19.540 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:19.540 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:19.540 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1179 00:18:19.799 [2024-12-10 12:21:26.414653] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1179: invalid model number 'SPDK_Controller' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:19.799 { 00:18:19.799 "nqn": "nqn.2016-06.io.spdk:cnode1179", 00:18:19.799 "model_number": "SPDK_Controller\u001f", 00:18:19.799 "method": "nvmf_create_subsystem", 00:18:19.799 "req_id": 1 00:18:19.799 } 00:18:19.799 Got JSON-RPC error response 00:18:19.799 response: 00:18:19.799 { 00:18:19.799 "code": -32602, 00:18:19.799 "message": "Invalid MN SPDK_Controller\u001f" 00:18:19.799 }' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:19.799 { 00:18:19.799 "nqn": "nqn.2016-06.io.spdk:cnode1179", 00:18:19.799 "model_number": "SPDK_Controller\u001f", 00:18:19.799 "method": "nvmf_create_subsystem", 00:18:19.799 "req_id": 1 00:18:19.799 } 00:18:19.799 Got JSON-RPC error response 00:18:19.799 response: 00:18:19.799 { 00:18:19.799 "code": -32602, 00:18:19.799 "message": "Invalid MN SPDK_Controller\u001f" 00:18:19.799 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:19.799 12:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x74' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.799 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 106 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! == \- ]] 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!'\''[t!N;BB/7t{0Qum"ja7' 00:18:19.800 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '!'\''[t!N;BB/7t{0Qum"ja7' nqn.2016-06.io.spdk:cnode24710 00:18:20.059 [2024-12-10 12:21:26.751794] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24710: invalid serial number '!'[t!N;BB/7t{0Qum"ja7' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:20.059 { 00:18:20.059 "nqn": "nqn.2016-06.io.spdk:cnode24710", 00:18:20.059 "serial_number": "!'\''[t!N;BB/7t{0Qum\"ja7", 00:18:20.059 "method": "nvmf_create_subsystem", 00:18:20.059 "req_id": 1 00:18:20.059 } 00:18:20.059 Got JSON-RPC error response 00:18:20.059 response: 00:18:20.059 { 00:18:20.059 "code": -32602, 00:18:20.059 "message": "Invalid SN !'\''[t!N;BB/7t{0Qum\"ja7" 00:18:20.059 }' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:20.059 { 00:18:20.059 "nqn": "nqn.2016-06.io.spdk:cnode24710", 00:18:20.059 "serial_number": "!'[t!N;BB/7t{0Qum\"ja7", 00:18:20.059 "method": "nvmf_create_subsystem", 00:18:20.059 "req_id": 1 00:18:20.059 } 00:18:20.059 Got JSON-RPC error response 00:18:20.059 response: 00:18:20.059 { 00:18:20.059 "code": -32602, 00:18:20.059 "message": "Invalid SN !'[t!N;BB/7t{0Qum\"ja7" 00:18:20.059 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' 
'47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 111 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.059 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.318 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x6e' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:20.319 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 104 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'a3|CSomb%aR38KymP}WA}Hz.Y;y-xnYfYtm5hI:M' 00:18:20.319 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'a3|CSomb%aR38KymP}WA}Hz.Y;y-xnYfYtm5hI:M' nqn.2016-06.io.spdk:cnode55 00:18:20.578 [2024-12-10 12:21:27.233356] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode55: invalid model number 'a3|CSomb%aR38KymP}WA}Hz.Y;y-xnYfYtm5hI:M' 00:18:20.578 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:20.578 { 00:18:20.578 "nqn": "nqn.2016-06.io.spdk:cnode55", 00:18:20.578 "model_number": "a3|CSomb%aR38KymP}WA}Hz.Y;y-x\u007fnYfYtm5hI:M", 00:18:20.578 "method": "nvmf_create_subsystem", 00:18:20.578 "req_id": 1 00:18:20.578 } 00:18:20.578 Got JSON-RPC error response 00:18:20.578 response: 00:18:20.578 { 00:18:20.578 "code": -32602, 00:18:20.578 "message": "Invalid MN a3|CSomb%aR38KymP}WA}Hz.Y;y-x\u007fnYfYtm5hI:M" 00:18:20.578 }' 00:18:20.578 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:20.578 { 00:18:20.578 "nqn": "nqn.2016-06.io.spdk:cnode55", 00:18:20.578 "model_number": "a3|CSomb%aR38KymP}WA}Hz.Y;y-x\u007fnYfYtm5hI:M", 00:18:20.578 "method": "nvmf_create_subsystem", 00:18:20.578 "req_id": 1 
00:18:20.578 } 00:18:20.579 Got JSON-RPC error response 00:18:20.579 response: 00:18:20.579 { 00:18:20.579 "code": -32602, 00:18:20.579 "message": "Invalid MN a3|CSomb%aR38KymP}WA}Hz.Y;y-x\u007fnYfYtm5hI:M" 00:18:20.579 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:20.579 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:20.837 [2024-12-10 12:21:27.434114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.837 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:21.095 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:21.095 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:21.095 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:21.095 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:21.095 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:21.095 [2024-12-10 12:21:27.860844] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:21.095 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:21.096 { 00:18:21.096 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:21.096 "listen_address": { 00:18:21.096 "trtype": "tcp", 00:18:21.096 "traddr": "", 00:18:21.096 "trsvcid": "4421" 00:18:21.096 }, 00:18:21.096 "method": "nvmf_subsystem_remove_listener", 00:18:21.096 "req_id": 1 00:18:21.096 } 00:18:21.096 Got JSON-RPC error response 00:18:21.096 response: 00:18:21.096 { 00:18:21.096 "code": -32602, 00:18:21.096 "message": "Invalid parameters" 00:18:21.096 }' 00:18:21.096 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:21.096 { 00:18:21.096 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:21.096 "listen_address": { 00:18:21.096 "trtype": "tcp", 00:18:21.096 "traddr": "", 00:18:21.096 "trsvcid": "4421" 00:18:21.096 }, 00:18:21.096 "method": "nvmf_subsystem_remove_listener", 00:18:21.096 "req_id": 1 00:18:21.096 } 00:18:21.096 Got JSON-RPC error response 00:18:21.096 response: 00:18:21.096 { 00:18:21.096 "code": -32602, 00:18:21.096 "message": "Invalid parameters" 00:18:21.096 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:21.096 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26816 -i 0 00:18:21.354 [2024-12-10 12:21:28.057452] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26816: invalid cntlid range [0-65519] 00:18:21.354 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:21.354 { 00:18:21.354 "nqn": "nqn.2016-06.io.spdk:cnode26816", 00:18:21.354 "min_cntlid": 0, 00:18:21.354 "method": "nvmf_create_subsystem", 00:18:21.354 "req_id": 1 00:18:21.354 } 00:18:21.354 Got JSON-RPC error response 00:18:21.354 response: 00:18:21.354 { 00:18:21.354 "code": -32602, 00:18:21.354 "message": "Invalid cntlid 
range [0-65519]" 00:18:21.354 }' 00:18:21.354 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:21.354 { 00:18:21.354 "nqn": "nqn.2016-06.io.spdk:cnode26816", 00:18:21.354 "min_cntlid": 0, 00:18:21.354 "method": "nvmf_create_subsystem", 00:18:21.354 "req_id": 1 00:18:21.354 } 00:18:21.354 Got JSON-RPC error response 00:18:21.354 response: 00:18:21.354 { 00:18:21.354 "code": -32602, 00:18:21.354 "message": "Invalid cntlid range [0-65519]" 00:18:21.354 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:21.354 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11746 -i 65520 00:18:21.612 [2024-12-10 12:21:28.254127] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11746: invalid cntlid range [65520-65519] 00:18:21.612 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:21.612 { 00:18:21.612 "nqn": "nqn.2016-06.io.spdk:cnode11746", 00:18:21.612 "min_cntlid": 65520, 00:18:21.612 "method": "nvmf_create_subsystem", 00:18:21.612 "req_id": 1 00:18:21.612 } 00:18:21.612 Got JSON-RPC error response 00:18:21.612 response: 00:18:21.612 { 00:18:21.612 "code": -32602, 00:18:21.612 "message": "Invalid cntlid range [65520-65519]" 00:18:21.612 }' 00:18:21.612 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:21.612 { 00:18:21.612 "nqn": "nqn.2016-06.io.spdk:cnode11746", 00:18:21.612 "min_cntlid": 65520, 00:18:21.612 "method": "nvmf_create_subsystem", 00:18:21.612 "req_id": 1 00:18:21.612 } 00:18:21.612 Got JSON-RPC error response 00:18:21.612 response: 00:18:21.612 { 00:18:21.612 "code": -32602, 00:18:21.612 "message": "Invalid cntlid range [65520-65519]" 00:18:21.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:21.612 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6830 -I 0 00:18:21.871 [2024-12-10 12:21:28.466852] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6830: invalid cntlid range [1-0] 00:18:21.871 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:21.871 { 00:18:21.871 "nqn": "nqn.2016-06.io.spdk:cnode6830", 00:18:21.871 "max_cntlid": 0, 00:18:21.871 "method": "nvmf_create_subsystem", 00:18:21.871 "req_id": 1 00:18:21.871 } 00:18:21.871 Got JSON-RPC error response 00:18:21.871 response: 00:18:21.871 { 00:18:21.871 "code": -32602, 00:18:21.871 "message": "Invalid cntlid range [1-0]" 00:18:21.871 }' 00:18:21.871 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:21.871 { 00:18:21.871 "nqn": "nqn.2016-06.io.spdk:cnode6830", 00:18:21.871 "max_cntlid": 0, 00:18:21.871 "method": "nvmf_create_subsystem", 00:18:21.871 "req_id": 1 00:18:21.871 } 00:18:21.871 Got JSON-RPC error response 00:18:21.871 response: 00:18:21.871 { 00:18:21.871 "code": -32602, 00:18:21.871 "message": "Invalid cntlid range [1-0]" 00:18:21.871 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:21.871 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23646 -I 65520 00:18:21.871 [2024-12-10 
12:21:28.679628] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23646: invalid cntlid range [1-65520] 00:18:22.130 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:22.130 { 00:18:22.130 "nqn": "nqn.2016-06.io.spdk:cnode23646", 00:18:22.130 "max_cntlid": 65520, 00:18:22.130 "method": "nvmf_create_subsystem", 00:18:22.130 "req_id": 1 00:18:22.130 } 00:18:22.130 Got JSON-RPC error response 00:18:22.130 response: 00:18:22.130 { 00:18:22.130 "code": -32602, 00:18:22.130 "message": "Invalid cntlid range [1-65520]" 00:18:22.130 }' 00:18:22.130 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:22.130 { 00:18:22.130 "nqn": "nqn.2016-06.io.spdk:cnode23646", 00:18:22.130 "max_cntlid": 65520, 00:18:22.130 "method": "nvmf_create_subsystem", 00:18:22.130 "req_id": 1 00:18:22.130 } 00:18:22.130 Got JSON-RPC error response 00:18:22.130 response: 00:18:22.130 { 00:18:22.130 "code": -32602, 00:18:22.130 "message": "Invalid cntlid range [1-65520]" 00:18:22.130 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:22.130 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14094 -i 6 -I 5 00:18:22.130 [2024-12-10 12:21:28.884319] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14094: invalid cntlid range [6-5] 00:18:22.130 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:22.130 { 00:18:22.130 "nqn": "nqn.2016-06.io.spdk:cnode14094", 00:18:22.130 "min_cntlid": 6, 00:18:22.130 "max_cntlid": 5, 00:18:22.130 "method": "nvmf_create_subsystem", 00:18:22.130 "req_id": 1 00:18:22.130 } 00:18:22.130 Got JSON-RPC error response 00:18:22.130 response: 00:18:22.130 { 00:18:22.130 "code": -32602, 00:18:22.130 "message": "Invalid cntlid range [6-5]" 00:18:22.130 }' 00:18:22.130 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:22.130 { 00:18:22.130 "nqn": "nqn.2016-06.io.spdk:cnode14094", 00:18:22.130 "min_cntlid": 6, 00:18:22.130 "max_cntlid": 5, 00:18:22.130 "method": "nvmf_create_subsystem", 00:18:22.130 "req_id": 1 00:18:22.130 } 00:18:22.130 Got JSON-RPC error response 00:18:22.130 response: 00:18:22.130 { 00:18:22.130 "code": -32602, 00:18:22.130 "message": "Invalid cntlid range [6-5]" 00:18:22.130 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:22.130 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:22.389 { 00:18:22.389 "name": "foobar", 00:18:22.389 "method": "nvmf_delete_target", 00:18:22.389 "req_id": 1 00:18:22.389 } 00:18:22.389 Got JSON-RPC error response 00:18:22.389 response: 00:18:22.389 { 00:18:22.389 "code": -32602, 00:18:22.389 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:18:22.389 }' 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:22.389 { 00:18:22.389 "name": "foobar", 00:18:22.389 "method": "nvmf_delete_target", 00:18:22.389 "req_id": 1 00:18:22.389 } 00:18:22.389 Got JSON-RPC error response 00:18:22.389 response: 00:18:22.389 { 00:18:22.389 "code": -32602, 00:18:22.389 "message": "The specified target doesn't exist, cannot delete it." 00:18:22.389 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:22.389 rmmod nvme_tcp 00:18:22.389 rmmod nvme_fabrics 00:18:22.389 rmmod nvme_keyring 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:22.389 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3631481 ']' 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3631481 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3631481 ']' 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3631481 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3631481 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3631481' 00:18:22.390 killing process with pid 3631481 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3631481 00:18:22.390 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3631481 00:18:23.765 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:23.766 12:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.766 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.670 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:25.670 00:18:25.670 real 0m12.414s 00:18:25.670 user 0m23.608s 00:18:25.670 sys 0m4.594s 00:18:25.670 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.670 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:25.670 ************************************ 00:18:25.670 END TEST nvmf_invalid 00:18:25.670 ************************************ 00:18:25.670 12:21:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:25.670 12:21:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.670 12:21:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.670 12:21:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.670 ************************************ 00:18:25.670 START TEST nvmf_connect_stress 00:18:25.671 ************************************ 00:18:25.671 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:25.930 * Looking for test storage... 
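For readability, here is a condensed sketch (not the verbatim target/invalid.sh, whose xtrace appears above) of what the nvmf_invalid run just exercised: gen_random_s assembles a random printable string one character at a time from the decimal code points 32..127, and each rpc.py call is expected to fail with JSON-RPC error -32602. The helper body, the 21/41 lengths, and the `|| true` guards are inferred from the trace rather than copied from the script, so treat this as an approximation.

gen_random_s() {                       # mirrors the printf/echo/string+= loop traced above
    local length=$1 ll hex string=
    for (( ll = 0; ll < length; ll++ )); do
        printf -v hex '%x' $(( RANDOM % 96 + 32 ))   # printable ASCII range 32..127
        string+=$(echo -e "\x$hex")
    done
    echo "$string"
}

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

out=$($rpc nvmf_create_subsystem -s "$(gen_random_s 21)" \
      nqn.2016-06.io.spdk:cnode24710 2>&1) || true
[[ $out == *"Invalid SN"* ]]           # 21 chars exceeds the 20-byte serial number field

out=$($rpc nvmf_create_subsystem -d "$(gen_random_s 41)" \
      nqn.2016-06.io.spdk:cnode55 2>&1) || true
[[ $out == *"Invalid MN"* ]]           # 41 chars exceeds the 40-byte model number field

out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26816 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]] # min_cntlid 0 rejected: valid range is [1-65519]

The character-by-character `printf %x` / `echo -e '\xNN'` / `string+=` triplets in the trace are exactly this loop unrolled, and the `[[ $out == *Invalid\ SN* ]]`-style glob match at the end of each block is what lets the suite assert on the error text rather than only on the nonzero exit code.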
00:18:25.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.930 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:25.930 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:25.930 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:25.930 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:25.930 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.930 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.930 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:25.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.931 --rc genhtml_branch_coverage=1 00:18:25.931 --rc genhtml_function_coverage=1 00:18:25.931 --rc genhtml_legend=1 00:18:25.931 --rc geninfo_all_blocks=1 00:18:25.931 --rc geninfo_unexecuted_blocks=1 00:18:25.931 00:18:25.931 ' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:25.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.931 --rc genhtml_branch_coverage=1 00:18:25.931 --rc genhtml_function_coverage=1 00:18:25.931 --rc genhtml_legend=1 00:18:25.931 --rc geninfo_all_blocks=1 00:18:25.931 --rc geninfo_unexecuted_blocks=1 00:18:25.931 00:18:25.931 ' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:25.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.931 --rc genhtml_branch_coverage=1 00:18:25.931 --rc genhtml_function_coverage=1 00:18:25.931 --rc genhtml_legend=1 00:18:25.931 --rc geninfo_all_blocks=1 00:18:25.931 --rc geninfo_unexecuted_blocks=1 00:18:25.931 00:18:25.931 ' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:25.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.931 --rc genhtml_branch_coverage=1 00:18:25.931 --rc genhtml_function_coverage=1 00:18:25.931 --rc genhtml_legend=1 00:18:25.931 --rc geninfo_all_blocks=1 00:18:25.931 --rc geninfo_unexecuted_blocks=1 00:18:25.931 00:18:25.931 ' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:25.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:25.931 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.932 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.932 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.932 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:25.932 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:25.932 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:25.932 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:31.202 12:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:31.202 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:31.203 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:31.203 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:31.203 Found net devices under 0000:af:00.0: cvl_0_0 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:31.203 Found net devices under 0000:af:00.1: cvl_0_1 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
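[editor's note: the per-PCI loop traced above resolves each NIC to its kernel interface through sysfs — a device's interfaces are listed under /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of that mapping (illustrative PCI address; this is not the harness code itself):

pci=0000:af:00.0                                    # first port reported in the trace above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one glob match per interface, e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
]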
-- # net_devs+=("${pci_net_devs[@]}") 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:31.203 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:31.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:18:31.462 00:18:31.462 --- 10.0.0.2 ping statistics --- 00:18:31.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.462 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:18:31.462 00:18:31.462 --- 10.0.0.1 ping statistics --- 00:18:31.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.462 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3636001 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3636001 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3636001 ']' 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
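[editor's note: the namespace plumbing just traced moves the target port (cvl_0_0) into cvl_0_0_ns_spdk, addresses the target side as 10.0.0.2/24 and the initiator side as 10.0.0.1/24, then verifies reachability in both directions with ping. A sketch of the same topology built from a veth pair, usable on a box without two physical ports (namespace and interface names here are placeholders, not the harness's):

sudo ip netns add tgt_ns                            # stands in for cvl_0_0_ns_spdk
sudo ip link add veth_ini type veth peer name veth_tgt
sudo ip link set veth_tgt netns tgt_ns              # target end lives inside the namespace
sudo ip addr add 10.0.0.1/24 dev veth_ini           # initiator side, as in the trace
sudo ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
sudo ip link set veth_ini up
sudo ip netns exec tgt_ns ip link set veth_tgt up
sudo ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.2                                  # same reachability check as above
]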
/var/tmp/spdk.sock...' 00:18:31.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.462 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 [2024-12-10 12:21:38.276204] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:31.462 [2024-12-10 12:21:38.276291] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.721 [2024-12-10 12:21:38.391561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.721 [2024-12-10 12:21:38.499444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.721 [2024-12-10 12:21:38.499488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.721 [2024-12-10 12:21:38.499497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.721 [2024-12-10 12:21:38.499507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.721 [2024-12-10 12:21:38.499515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.721 [2024-12-10 12:21:38.501731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.721 [2024-12-10 12:21:38.501798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.721 [2024-12-10 12:21:38.501806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.288 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.288 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:32.289 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.289 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.289 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.728 [2024-12-10 12:21:39.143223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.728 [2024-12-10 12:21:39.167452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:32.728 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.729 NULL1 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3636042 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:32.729 12:21:39 
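[editor's note: collapsed, the rpc_cmd bring-up traced above amounts to four SPDK RPCs. A sketch of the equivalent standalone rpc.py calls — the harness actually issues them through its rpc_cmd wrapper inside the target namespace, and attaching NULL1 as a namespace of cnode1 is not visible in this excerpt:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
]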
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.729 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.988 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.988 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:32.988 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.988 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.988 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.247 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:33.247 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.247 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.247 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.506 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.506 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:33.506 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.506 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.506 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.074 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.074 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:34.074 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.074 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.074 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.333 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.333 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:34.333 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.333 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.333 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.591 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.591 12:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:34.591 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.591 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.591 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.850 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.850 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:34.850 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.850 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.850 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.109 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.109 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:35.109 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.109 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.109 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.677 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.677 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:35.677 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.677 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.677 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.935 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.935 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:35.935 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.935 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.935 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.194 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.194 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:36.194 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.194 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.194 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.452 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.452 12:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:36.452 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.452 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.452 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.020 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.020 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:37.020 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.020 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.020 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.309 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.309 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:37.309 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.309 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.309 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.568 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.568 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:37.568 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.569 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.569 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.828 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.828 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:37.828 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.828 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.828 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.087 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.087 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:38.087 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.087 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.087 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.654 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.654 12:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:38.654 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.654 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.654 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.913 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.913 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:38.913 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.913 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.913 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.172 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.172 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:39.172 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.172 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.172 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.430 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.430 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:39.430 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.431 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.431 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.689 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.689 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:39.689 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.689 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.689 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.258 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.258 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:40.258 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.258 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.258 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.517 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.517 12:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:40.517 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.517 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.517 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.776 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.776 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:40.776 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.776 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.776 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.034 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.034 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:41.034 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.034 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.034 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.602 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.602 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:41.602 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.602 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.602 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.861 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.861 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:41.861 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.861 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.861 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.120 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.120 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:42.120 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.120 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.120 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.379 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.379 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:42.379 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.379 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.379 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.638 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3636042 00:18:42.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3636042) - No such process 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3636042 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.638 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.638 rmmod nvme_tcp 00:18:42.897 rmmod nvme_fabrics 00:18:42.897 rmmod nvme_keyring 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3636001 ']' 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3636001 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3636001 ']' 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3636001 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3636001 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
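[editor's note: the alternating kill -0 / rpc_cmd records above are the heartbeats of the stress loop — while the backgrounded connect_stress tool (PID 3636042) stays alive, the script keeps the target busy with the batched RPC file, and the loop exits once kill -0 reports "No such process". Reconstructed from the trace as a sketch, not the literal connect_stress.sh source; the rpc.txt redirect is an assumption:

PERF_PID=$!                         # backgrounded connect_stress run
while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc_cmd <"$rpcs"                # replay the batched RPCs while the tool runs
done
wait "$PERF_PID"                    # propagates connect_stress's exit status
]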
00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3636001' 00:18:42.897 killing process with pid 3636001 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3636001 00:18:42.897 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3636001 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.275 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.178 00:18:46.178 real 0m20.356s 00:18:46.178 user 0m44.029s 00:18:46.178 sys 0m8.004s 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.178 ************************************ 00:18:46.178 END TEST nvmf_connect_stress 00:18:46.178 ************************************ 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.178 ************************************ 00:18:46.178 START TEST nvmf_fused_ordering 00:18:46.178 ************************************ 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:46.178 * Looking for test storage... 
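[editor's note: the iptr call above is the teardown half of a tag-and-sweep firewall pattern — every rule the harness inserts via ipts at setup carries an SPDK_NVMF comment, so cleanup can drop all of them in one pass instead of tracking rules individually. The pattern in isolation, exactly as it appears at setup and teardown in this trace:

# setup: insert the ACCEPT rule, tagged with its own removal marker
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: rewrite the ruleset without any SPDK_NVMF-tagged rule
iptables-save | grep -v SPDK_NVMF | iptables-restore
]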
00:18:46.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:46.178 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:46.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.437 --rc genhtml_branch_coverage=1 00:18:46.437 --rc genhtml_function_coverage=1 00:18:46.437 --rc genhtml_legend=1 00:18:46.437 --rc geninfo_all_blocks=1 00:18:46.437 --rc geninfo_unexecuted_blocks=1 00:18:46.437 00:18:46.437 ' 00:18:46.437 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.438 --rc genhtml_branch_coverage=1 00:18:46.438 --rc genhtml_function_coverage=1 00:18:46.438 --rc genhtml_legend=1 00:18:46.438 --rc geninfo_all_blocks=1 00:18:46.438 --rc geninfo_unexecuted_blocks=1 00:18:46.438 00:18:46.438 ' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.438 --rc genhtml_branch_coverage=1 00:18:46.438 --rc genhtml_function_coverage=1 00:18:46.438 --rc genhtml_legend=1 00:18:46.438 --rc geninfo_all_blocks=1 00:18:46.438 --rc geninfo_unexecuted_blocks=1 00:18:46.438 00:18:46.438 ' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.438 --rc genhtml_branch_coverage=1 00:18:46.438 --rc genhtml_function_coverage=1 00:18:46.438 --rc genhtml_legend=1 00:18:46.438 --rc geninfo_all_blocks=1 00:18:46.438 --rc geninfo_unexecuted_blocks=1 00:18:46.438 00:18:46.438 ' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
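[editor's note: the cmp_versions trace above splits both version strings on '.', '-' and ':' and compares the components left to right, so lt 1.15 2 succeeds as soon as 1 < 2. A compact sketch of the same comparison, numeric components assumed — scripts/common.sh handles more edge cases:

version_lt() {                              # usage: version_lt 1.15 2
    local -a v1 v2; local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1                                # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
]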
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:46.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.438 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:51.715 12:21:57 
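The "[: : integer expression expected" message above is test/nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an unset or empty variable reaches an arithmetic test. Defaulting the expansion to 0 keeps the comparison well-formed; a minimal sketch of the usual guard (SPDK_TEST_EXAMPLE is a placeholder variable name):

#!/usr/bin/env bash
# Breaks when the flag is unset or empty:
#   [ "$SPDK_TEST_EXAMPLE" -eq 1 ]   ->  [: : integer expression expected
# Supplying a numeric default makes the test always valid:
if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
    echo "feature enabled"
fi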
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:51.715 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:51.715 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:51.715 Found net devices under 0000:af:00.0: cvl_0_0 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:51.715 Found net devices under 0000:af:00.1: cvl_0_1 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
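gather_supported_nvmf_pci_devs above matches vendor/device IDs against a PCI bus cache and resolves each hit to its netdev through /sys/bus/pci/devices/<addr>/net/, which is how 0000:af:00.0 and 0000:af:00.1 map to cvl_0_0 and cvl_0_1. A simplified standalone sketch of the same sysfs lookup (ID list trimmed to the two E810 device IDs seen in this run):

#!/usr/bin/env bash
# Resolve supported Intel E810 PCI functions to their net interface names.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == "$intel" && $device =~ ^0x(1592|159b)$ ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
    done
done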
-- # net_devs+=("${pci_net_devs[@]}") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.715 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:51.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:18:51.716 00:18:51.716 --- 10.0.0.2 ping statistics --- 00:18:51.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.716 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:51.716 00:18:51.716 --- 10.0.0.1 ping statistics --- 00:18:51.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.716 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3641305 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3641305 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3641305 ']' 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
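nvmf_tcp_init above builds a two-port loop on one host: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with the target address 10.0.0.2 while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, the firewall opens TCP/4420, and the two pings prove reachability in both directions. A condensed sketch of that bring-up (interface names as in this run; requires root, address flushes omitted):

#!/usr/bin/env bash
set -e
# Put the target port in its own namespace so initiator-to-target traffic
# crosses a real network path instead of the loopback shortcut.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP listener port through the host firewall.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify connectivity both ways before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1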
/var/tmp/spdk.sock...' 00:18:51.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.716 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:51.716 [2024-12-10 12:21:57.980197] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:51.716 [2024-12-10 12:21:57.980286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.716 [2024-12-10 12:21:58.096662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.716 [2024-12-10 12:21:58.198503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.716 [2024-12-10 12:21:58.198548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.716 [2024-12-10 12:21:58.198558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.716 [2024-12-10 12:21:58.198568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.716 [2024-12-10 12:21:58.198575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.716 [2024-12-10 12:21:58.199900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.974 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.974 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:51.974 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.974 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.974 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.233 [2024-12-10 12:21:58.805308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.233 [2024-12-10 12:21:58.821484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.233 NULL1 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.233 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:52.233 [2024-12-10 12:21:58.895533] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
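The rpc_cmd calls above configure the target over its UNIX-domain RPC socket: a TCP transport, a subsystem with room for 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev exported as namespace 1 (the "size: 1GB" the initiator reports below). Outside the harness the same sequence can be issued with SPDK's scripts/rpc.py; a sketch assuming an SPDK checkout and a running nvmf_tgt:

#!/usr/bin/env bash
set -e
RPC="scripts/rpc.py"                 # talks to /var/tmp/spdk.sock by default
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_create_transport -t tcp -o -u 8192        # transport options as passed in this run
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                # 1000 MB, 512-byte blocks
$RPC nvmf_subsystem_add_ns "$NQN" NULL1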
00:18:52.233 [2024-12-10 12:21:58.895590] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641447 ] 00:18:52.492 Attached to nqn.2016-06.io.spdk:cnode1 00:18:52.492 Namespace ID: 1 size: 1GB 00:18:52.492 fused_ordering(0) 00:18:52.492 fused_ordering(1) 00:18:52.492 fused_ordering(2) 00:18:52.492 fused_ordering(3) 00:18:52.492 fused_ordering(4) 00:18:52.492 fused_ordering(5) 00:18:52.492 fused_ordering(6) 00:18:52.492 fused_ordering(7) 00:18:52.492 fused_ordering(8) 00:18:52.492 fused_ordering(9) 00:18:52.492 fused_ordering(10) 00:18:52.492 fused_ordering(11) 00:18:52.492 fused_ordering(12) 00:18:52.492 fused_ordering(13) 00:18:52.492 fused_ordering(14) 00:18:52.492 fused_ordering(15) 00:18:52.492 fused_ordering(16) 00:18:52.492 fused_ordering(17) 00:18:52.492 fused_ordering(18) 00:18:52.492 fused_ordering(19) 00:18:52.492 fused_ordering(20) 00:18:52.492 fused_ordering(21) 00:18:52.492 fused_ordering(22) 00:18:52.492 fused_ordering(23) 00:18:52.492 fused_ordering(24) 00:18:52.492 fused_ordering(25) 00:18:52.492 fused_ordering(26) 00:18:52.492 fused_ordering(27) 00:18:52.492 fused_ordering(28) 00:18:52.492 fused_ordering(29) 00:18:52.492 fused_ordering(30) 00:18:52.492 fused_ordering(31) 00:18:52.492 fused_ordering(32) 00:18:52.492 fused_ordering(33) 00:18:52.492 fused_ordering(34) 00:18:52.492 fused_ordering(35) 00:18:52.492 fused_ordering(36) 00:18:52.492 fused_ordering(37) 00:18:52.492 fused_ordering(38) 00:18:52.492 fused_ordering(39) 00:18:52.492 fused_ordering(40) 00:18:52.492 fused_ordering(41) 00:18:52.492 fused_ordering(42) 00:18:52.492 fused_ordering(43) 00:18:52.492 fused_ordering(44) 00:18:52.492 fused_ordering(45) 00:18:52.492 fused_ordering(46) 00:18:52.492 fused_ordering(47) 00:18:52.492 fused_ordering(48) 00:18:52.492 fused_ordering(49) 00:18:52.492 fused_ordering(50) 00:18:52.492 fused_ordering(51) 00:18:52.492 fused_ordering(52) 00:18:52.492 fused_ordering(53) 00:18:52.492 fused_ordering(54) 00:18:52.492 fused_ordering(55) 00:18:52.492 fused_ordering(56) 00:18:52.492 fused_ordering(57) 00:18:52.492 fused_ordering(58) 00:18:52.492 fused_ordering(59) 00:18:52.492 fused_ordering(60) 00:18:52.492 fused_ordering(61) 00:18:52.492 fused_ordering(62) 00:18:52.492 fused_ordering(63) 00:18:52.492 fused_ordering(64) 00:18:52.492 fused_ordering(65) 00:18:52.492 fused_ordering(66) 00:18:52.492 fused_ordering(67) 00:18:52.492 fused_ordering(68) 00:18:52.492 fused_ordering(69) 00:18:52.492 fused_ordering(70) 00:18:52.492 fused_ordering(71) 00:18:52.492 fused_ordering(72) 00:18:52.492 fused_ordering(73) 00:18:52.492 fused_ordering(74) 00:18:52.492 fused_ordering(75) 00:18:52.492 fused_ordering(76) 00:18:52.492 fused_ordering(77) 00:18:52.492 fused_ordering(78) 00:18:52.492 fused_ordering(79) 00:18:52.492 fused_ordering(80) 00:18:52.492 fused_ordering(81) 00:18:52.492 fused_ordering(82) 00:18:52.492 fused_ordering(83) 00:18:52.492 fused_ordering(84) 00:18:52.492 fused_ordering(85) 00:18:52.492 fused_ordering(86) 00:18:52.492 fused_ordering(87) 00:18:52.492 fused_ordering(88) 00:18:52.492 fused_ordering(89) 00:18:52.492 fused_ordering(90) 00:18:52.492 fused_ordering(91) 00:18:52.492 fused_ordering(92) 00:18:52.492 fused_ordering(93) 00:18:52.492 fused_ordering(94) 00:18:52.492 fused_ordering(95) 00:18:52.492 fused_ordering(96) 00:18:52.492 fused_ordering(97) 00:18:52.492 fused_ordering(98) 
00:18:52.492 fused_ordering(99) ... 00:18:54.456 fused_ordering(958) [fused_ordering lines 99 through 958, identical except for the incrementing counter, condensed]
00:18:54.456 fused_ordering(959) 00:18:54.456 fused_ordering(960) 00:18:54.456 fused_ordering(961) 00:18:54.456 fused_ordering(962) 00:18:54.456 fused_ordering(963) 00:18:54.456 fused_ordering(964) 00:18:54.456 fused_ordering(965) 00:18:54.456 fused_ordering(966) 00:18:54.456 fused_ordering(967) 00:18:54.456 fused_ordering(968) 00:18:54.456 fused_ordering(969) 00:18:54.456 fused_ordering(970) 00:18:54.456 fused_ordering(971) 00:18:54.456 fused_ordering(972) 00:18:54.456 fused_ordering(973) 00:18:54.456 fused_ordering(974) 00:18:54.456 fused_ordering(975) 00:18:54.456 fused_ordering(976) 00:18:54.456 fused_ordering(977) 00:18:54.456 fused_ordering(978) 00:18:54.456 fused_ordering(979) 00:18:54.456 fused_ordering(980) 00:18:54.456 fused_ordering(981) 00:18:54.456 fused_ordering(982) 00:18:54.456 fused_ordering(983) 00:18:54.456 fused_ordering(984) 00:18:54.456 fused_ordering(985) 00:18:54.456 fused_ordering(986) 00:18:54.456 fused_ordering(987) 00:18:54.456 fused_ordering(988) 00:18:54.456 fused_ordering(989) 00:18:54.456 fused_ordering(990) 00:18:54.456 fused_ordering(991) 00:18:54.456 fused_ordering(992) 00:18:54.456 fused_ordering(993) 00:18:54.456 fused_ordering(994) 00:18:54.456 fused_ordering(995) 00:18:54.456 fused_ordering(996) 00:18:54.456 fused_ordering(997) 00:18:54.456 fused_ordering(998) 00:18:54.456 fused_ordering(999) 00:18:54.456 fused_ordering(1000) 00:18:54.456 fused_ordering(1001) 00:18:54.456 fused_ordering(1002) 00:18:54.456 fused_ordering(1003) 00:18:54.456 fused_ordering(1004) 00:18:54.456 fused_ordering(1005) 00:18:54.456 fused_ordering(1006) 00:18:54.456 fused_ordering(1007) 00:18:54.456 fused_ordering(1008) 00:18:54.456 fused_ordering(1009) 00:18:54.456 fused_ordering(1010) 00:18:54.456 fused_ordering(1011) 00:18:54.456 fused_ordering(1012) 00:18:54.456 fused_ordering(1013) 00:18:54.456 fused_ordering(1014) 00:18:54.456 fused_ordering(1015) 00:18:54.456 fused_ordering(1016) 00:18:54.456 fused_ordering(1017) 00:18:54.456 fused_ordering(1018) 00:18:54.456 fused_ordering(1019) 00:18:54.456 fused_ordering(1020) 00:18:54.456 fused_ordering(1021) 00:18:54.456 fused_ordering(1022) 00:18:54.456 fused_ordering(1023) 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.456 rmmod nvme_tcp 00:18:54.456 rmmod nvme_fabrics 00:18:54.456 rmmod nvme_keyring 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:54.456 12:22:01 
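nvmfcleanup above wraps the module unloads in set +e and a {1..20} retry loop because nvme-tcp and nvme-fabrics can stay referenced for a moment while the last connections tear down; the bare rmmod lines are the kernel confirming each unload. A tolerant-unload sketch along the same lines:

#!/usr/bin/env bash
# Modules may still be busy right after disconnect, so retry briefly
# instead of failing the whole run on the first EBUSY.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e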
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3641305 ']' 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3641305 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3641305 ']' 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3641305 00:18:54.456 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:54.457 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.457 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3641305 00:18:54.457 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.457 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.457 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3641305' 00:18:54.457 killing process with pid 3641305 00:18:54.457 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3641305 00:18:54.457 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3641305 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.391 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.923 00:18:57.923 real 0m11.411s 00:18:57.923 user 0m6.614s 00:18:57.923 sys 0m5.289s 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 ************************************ 00:18:57.923 END TEST nvmf_fused_ordering 00:18:57.923 
************************************ 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.923 ************************************ 00:18:57.923 START TEST nvmf_ns_masking 00:18:57.923 ************************************ 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:57.923 * Looking for test storage... 00:18:57.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724/@1725 -- # export LCOV_OPTS / LCOV=lcov [four near-identical multi-line export records elided; each carries the same option block: --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1] 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.923
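The lt 1.15 2 probe above splits both version strings on '.', '-' and ':' and compares them field by field; a compact, self-contained sketch of that comparison (simplified from the cmp_versions helper in scripts/common.sh, not its exact code):

    # lt A B -> exit 0 iff version A sorts strictly before version B
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        done
        return 1    # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"

Missing components default to 0, which is why 1.15 sorts before 2 here.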
12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.923 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repeated golangci/protoc/go toolchain prefixes elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...repeated golangci/protoc/go toolchain prefixes elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...repeated golangci/protoc/go toolchain prefixes elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...repeated golangci/protoc/go toolchain prefixes elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=811d4700-8775-408e-ab1d-9539549ef8dd 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=569c0459-17f7-41df-90ec-e0ed4dea7c86 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0ea86ff4-c0d8-477e-8160-4187892965c0 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:57.924 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:03.181 12:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:03.181 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:03.181 12:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:03.181 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:03.181 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:03.182 Found net devices under 0000:af:00.0: cvl_0_0 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
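The loop above resolves each matching PCI function to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net; a standalone approximation of that lookup (lspci stands in for the harness's cached PCI scan, and 8086:159b is the E810 device ID reported above):

    # Map each Intel E810 port (vendor 0x8086, device 0x159b) to its net interface
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] || continue      # no bound network driver, skip
            echo "Found net devices under $pci: ${net##*/}"
        done
    done
    # -> Found net devices under 0000:af:00.0: cvl_0_0
    # -> Found net devices under 0000:af:00.1: cvl_0_1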
00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:03.182 Found net devices under 0000:af:00.1: cvl_0_1 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:03.182 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.441 12:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:03.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:19:03.441 00:19:03.441 --- 10.0.0.2 ping statistics --- 00:19:03.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.441 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:19:03.441 00:19:03.441 --- 10.0.0.1 ping statistics --- 00:19:03.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.441 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3645458 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3645458 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3645458 ']' 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.441 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.442 12:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.442 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:03.442 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.442 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:03.442 [2024-12-10 12:22:10.203433] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:03.442 [2024-12-10 12:22:10.203524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.701 [2024-12-10 12:22:10.320231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.701 [2024-12-10 12:22:10.427274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.701 [2024-12-10 12:22:10.427321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.701 [2024-12-10 12:22:10.427332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.701 [2024-12-10 12:22:10.427343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.701 [2024-12-10 12:22:10.427351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
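Restated, nvmf_tcp_init and nvmfappstart above do four things: move one E810 port into a private network namespace to act as the target, address both ends of the link, open the NVMe/TCP port, and start nvmf_tgt inside that namespace, polling until its RPC socket answers. A condensed replay (commands are taken from the trace; the polling loop is a simplified stand-in for waitforlisten, using rpc_get_methods as a cheap liveness RPC):

    # Target port cvl_0_0 lives in its own netns; initiator cvl_0_1 stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # verify the two ends can reach each other

    # Start the target in the namespace and wait for /var/tmp/spdk.sock to answer
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    until [[ -S /var/tmp/spdk.sock ]] && "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1    # bail out if the target died
        sleep 0.1
    done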
00:19:03.701 [2024-12-10 12:22:10.428714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.268 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.268 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:04.268 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.268 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.268 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:04.268 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.268 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:04.526 [2024-12-10 12:22:11.205210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.526 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:04.526 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:04.527 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:04.784 Malloc1 00:19:04.784 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:05.043 Malloc2 00:19:05.043 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:05.301 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:05.301 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.561 [2024-12-10 12:22:12.267821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.561 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:05.561 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0ea86ff4-c0d8-477e-8160-4187892965c0 -a 10.0.0.2 -s 4420 -i 4 00:19:05.819 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:05.819 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:05.819 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.819 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:05.819 
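While waitforserial polls for the new controller, the provisioning just traced is easy to restate: the target is configured over rpc.py and the host attaches with an explicit host NQN, which is what the masking RPCs later key on. A condensed replay, including the visibility toggles the remainder of this test exercises (every command below appears verbatim in this log; only the $rpc shorthand and the comments are added):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target: TCP transport, two 64 MiB / 512 B malloc bdevs, subsystem, namespace, listener
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host: connect with an explicit host NQN so per-host masking can target it
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 0ea86ff4-c0d8-477e-8160-4187892965c0 -a 10.0.0.2 -s 4420 -i 4

    # Masking: re-add namespace 1 hidden by default, then grant/revoke per host NQN
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # Host-side probe (ns_is_visible): NSID listed and NGUID non-zero when visible
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid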
12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:07.721 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:07.721 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:07.721 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.722 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:07.722 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.722 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:07.722 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:07.722 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.980 [ 0]:0x1 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c721fa647a8e4b7b994a6a69c31e2359 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c721fa647a8e4b7b994a6a69c31e2359 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.980 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.238 [ 0]:0x1 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c721fa647a8e4b7b994a6a69c31e2359 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c721fa647a8e4b7b994a6a69c31e2359 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.238 12:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.238 [ 1]:0x2 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7491b474342e46198d26418f7d299ec8 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7491b474342e46198d26418f7d299ec8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:08.238 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:08.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.496 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:08.496 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:08.753 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:08.753 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0ea86ff4-c0d8-477e-8160-4187892965c0 -a 10.0.0.2 -s 4420 -i 4 00:19:09.011 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:09.011 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:09.011 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.011 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:09.011 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:09.011 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.910 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.168 [ 0]:0x2 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=7491b474342e46198d26418f7d299ec8 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7491b474342e46198d26418f7d299ec8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.168 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.426 [ 0]:0x1 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c721fa647a8e4b7b994a6a69c31e2359 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c721fa647a8e4b7b994a6a69c31e2359 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.426 [ 1]:0x2 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7491b474342e46198d26418f7d299ec8 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7491b474342e46198d26418f7d299ec8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.426 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.684 12:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.684 [ 0]:0x2 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.684 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.942 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7491b474342e46198d26418f7d299ec8 00:19:11.942 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7491b474342e46198d26418f7d299ec8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.942 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:11.942 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.942 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:11.942 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:11.942 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0ea86ff4-c0d8-477e-8160-4187892965c0 -a 10.0.0.2 -s 4420 -i 4 00:19:12.200 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:12.200 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:12.200 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.200 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:12.200 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:12.200 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:14.738 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:14.738 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:14.738 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:14.738 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:14.738 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:14.738 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:14.738 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:14.738 [ 0]:0x1 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c721fa647a8e4b7b994a6a69c31e2359 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c721fa647a8e4b7b994a6a69c31e2359 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:14.738 [ 1]:0x2 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7491b474342e46198d26418f7d299ec8 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7491b474342e46198d26418f7d299ec8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:14.738 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:14.996 [ 0]:0x2 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7491b474342e46198d26418f7d299ec8 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7491b474342e46198d26418f7d299ec8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:14.996 12:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:14.996 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:14.996 [2024-12-10 12:22:21.810779] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:14.996 request: 00:19:14.996 { 00:19:14.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.996 "nsid": 2, 00:19:14.996 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.996 "method": "nvmf_ns_remove_host", 00:19:14.996 "req_id": 1 00:19:14.996 } 00:19:14.996 Got JSON-RPC error response 00:19:14.996 response: 00:19:14.996 { 00:19:14.996 "code": -32602, 00:19:14.996 "message": "Invalid parameters" 00:19:14.996 } 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:15.254 12:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:15.254 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:15.255 [ 0]:0x2 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7491b474342e46198d26418f7d299ec8 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7491b474342e46198d26418f7d299ec8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3647414 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3647414 
/var/tmp/host.sock 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3647414 ']' 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:15.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.255 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:15.255 [2024-12-10 12:22:22.073923] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:15.255 [2024-12-10 12:22:22.074014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647414 ] 00:19:15.513 [2024-12-10 12:22:22.187239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.513 [2024-12-10 12:22:22.297866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.447 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.447 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:16.447 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.704 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:16.704 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 811d4700-8775-408e-ab1d-9539549ef8dd 00:19:16.704 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:16.704 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 811D47008775408EAB1D9539549EF8DD -i 00:19:16.963 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 569c0459-17f7-41df-90ec-e0ed4dea7c86 00:19:16.963 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:16.963 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 569C045917F741DF90ECE0ED4DEA7C86 -i 00:19:17.220 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:17.478 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:17.478 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:17.478 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:18.043 nvme0n1 00:19:18.043 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:18.043 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:18.301 nvme1n2 00:19:18.301 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:18.301 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:18.301 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:18.301 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:18.301 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 811d4700-8775-408e-ab1d-9539549ef8dd == \8\1\1\d\4\7\0\0\-\8\7\7\5\-\4\0\8\e\-\a\b\1\d\-\9\5\3\9\5\4\9\e\f\8\d\d ]] 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:18.559 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:18.817 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
569c0459-17f7-41df-90ec-e0ed4dea7c86 == \5\6\9\c\0\4\5\9\-\1\7\f\7\-\4\1\d\f\-\9\0\e\c\-\e\0\e\d\4\d\e\a\7\c\8\6 ]] 00:19:18.817 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.075 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 811d4700-8775-408e-ab1d-9539549ef8dd 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 811D47008775408EAB1D9539549EF8DD 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 811D47008775408EAB1D9539549EF8DD 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:19.333 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 811D47008775408EAB1D9539549EF8DD 00:19:19.333 [2024-12-10 12:22:26.091660] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:19.333 [2024-12-10 12:22:26.091698] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:19.333 [2024-12-10 12:22:26.091717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:19.333 request: 00:19:19.333 { 00:19:19.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.333 "namespace": { 00:19:19.333 "bdev_name": 
"invalid", 00:19:19.333 "nsid": 1, 00:19:19.333 "nguid": "811D47008775408EAB1D9539549EF8DD", 00:19:19.333 "no_auto_visible": false, 00:19:19.333 "hide_metadata": false 00:19:19.333 }, 00:19:19.333 "method": "nvmf_subsystem_add_ns", 00:19:19.333 "req_id": 1 00:19:19.333 } 00:19:19.333 Got JSON-RPC error response 00:19:19.333 response: 00:19:19.333 { 00:19:19.333 "code": -32602, 00:19:19.333 "message": "Invalid parameters" 00:19:19.333 } 00:19:19.333 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:19.333 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.333 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.333 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.333 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 811d4700-8775-408e-ab1d-9539549ef8dd 00:19:19.333 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:19.333 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 811D47008775408EAB1D9539549EF8DD -i 00:19:19.591 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:21.622 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:21.622 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:21.622 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3647414 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3647414 ']' 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3647414 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3647414 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3647414' 00:19:21.879 killing process with pid 3647414 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3647414 00:19:21.879 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3647414 00:19:24.405 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.405 rmmod nvme_tcp 00:19:24.405 rmmod nvme_fabrics 00:19:24.405 rmmod nvme_keyring 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3645458 ']' 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3645458 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3645458 ']' 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3645458 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3645458 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3645458' 00:19:24.405 killing process with pid 3645458 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3645458 00:19:24.405 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3645458 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 
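The visibility checks traced throughout the masking test above all reduce to one helper: list the controller's active namespaces, then pull the NGUID for the namespace in question and confirm it is not the all-zeroes placeholder a masked namespace reports. A minimal standalone sketch of that pattern, using the device name and NSIDs from this run (the real helper is ns_is_visible in target/ns_masking.sh):

ns_is_visible() {                                  # usage: ns_is_visible 0x1
    # The NSID must show up in the active namespace list at all...
    nvme list-ns /dev/nvme0 | grep "$1" || return 1
    # ...and its NGUID must be real, not the zeroed-out identity of a masked namespace.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

The NOT wrapper seen around several calls simply inverts this expectation: after nvmf_ns_remove_host revokes the host, ns_is_visible 0x1 is supposed to fail, which is exactly what the zeroed NGUID comparisons in the trace show.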
00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.777 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:28.307 00:19:28.307 real 0m30.305s 00:19:28.307 user 0m38.182s 00:19:28.307 sys 0m6.834s 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:28.307 ************************************ 00:19:28.307 END TEST nvmf_ns_masking 00:19:28.307 ************************************ 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.307 ************************************ 00:19:28.307 START TEST nvmf_nvme_cli 00:19:28.307 ************************************ 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:28.307 * Looking for test storage... 
00:19:28.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.307 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.308 --rc genhtml_branch_coverage=1 00:19:28.308 --rc genhtml_function_coverage=1 00:19:28.308 --rc genhtml_legend=1 00:19:28.308 --rc geninfo_all_blocks=1 00:19:28.308 --rc geninfo_unexecuted_blocks=1 00:19:28.308 00:19:28.308 ' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.308 --rc genhtml_branch_coverage=1 00:19:28.308 --rc genhtml_function_coverage=1 00:19:28.308 --rc genhtml_legend=1 00:19:28.308 --rc geninfo_all_blocks=1 00:19:28.308 --rc geninfo_unexecuted_blocks=1 00:19:28.308 00:19:28.308 ' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.308 --rc genhtml_branch_coverage=1 00:19:28.308 --rc genhtml_function_coverage=1 00:19:28.308 --rc genhtml_legend=1 00:19:28.308 --rc geninfo_all_blocks=1 00:19:28.308 --rc geninfo_unexecuted_blocks=1 00:19:28.308 00:19:28.308 ' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.308 --rc genhtml_branch_coverage=1 00:19:28.308 --rc genhtml_function_coverage=1 00:19:28.308 --rc genhtml_legend=1 00:19:28.308 --rc geninfo_all_blocks=1 00:19:28.308 --rc geninfo_unexecuted_blocks=1 00:19:28.308 00:19:28.308 ' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
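Before the test proper, sourcing nvmf/common.sh (traced below) pins the fabric ports and derives a host identity from nvme-cli. A rough sketch of that setup; the parameter expansion used to peel the UUID back out of the NQN is an assumption about the helper, not a verbatim copy:

NVMF_PORT=4420                                      # second/third ports: 4421, 4422
NVME_HOSTNQN=$(nvme gen-hostnqn)                    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}                     # assumed: keep only the trailing UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# Later connects can then carry the identity wholesale, e.g.:
# nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"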
00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.308 12:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:28.308 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:33.571 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:33.571 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.571 
12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.571 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:33.572 Found net devices under 0000:af:00.0: cvl_0_0 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:33.572 Found net devices under 0000:af:00.1: cvl_0_1 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:33.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:19:33.830 00:19:33.830 --- 10.0.0.2 ping statistics --- 00:19:33.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.830 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:33.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:33.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:19:33.830 00:19:33.830 --- 10.0.0.1 ping statistics --- 00:19:33.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.830 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3652719 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3652719 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3652719 ']' 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:34.088 [2024-12-10 12:22:40.725775] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
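[Annotation] The bring-up traced above follows a fixed pattern: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target, while its peer port (cvl_0_1) stays in the root namespace as the initiator, and a single iptables rule opens the default NVMe/TCP port. A condensed sketch of the commands the trace just executed, with the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays host-side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Because the two ports appear to be wired back-to-back on this rig, the ping in each direction validates real on-wire connectivity rather than a loopback path before the target is ever started.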
00:19:34.088 [2024-12-10 12:22:40.725876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.088 [2024-12-10 12:22:40.845023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.345 [2024-12-10 12:22:40.957525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.345 [2024-12-10 12:22:40.957571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.345 [2024-12-10 12:22:40.957582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.345 [2024-12-10 12:22:40.957592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.345 [2024-12-10 12:22:40.957601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.345 [2024-12-10 12:22:40.960156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.345 [2024-12-10 12:22:40.960230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.345 [2024-12-10 12:22:40.960267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.346 [2024-12-10 12:22:40.960275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:34.911 [2024-12-10 12:22:41.590126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:34.911 Malloc0 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
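[Annotation] rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the target's JSON-RPC UNIX socket at /var/tmp/spdk.sock; since UNIX sockets are filesystem objects, no `ip netns exec` prefix is needed even though nvmf_tgt itself runs inside the namespace. The two bdev_malloc_create calls allocate 64 MiB RAM disks with a 512-byte block size, and the trace that follows attaches them as namespaces of cnode1. The full provisioning sequence this test drives, gathered in one place (commands exactly as they appear across the trace; paths assume the spdk tree as working directory):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291    # -a: allow any host, -s: serial
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The serial number SPDKISFASTANDAWESOME matters later: waitforserial greps `lsblk -l -o NAME,SERIAL` for it to confirm that both namespaces surfaced as block devices on the initiator.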
00:19:34.911 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.169 Malloc1 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.169 [2024-12-10 12:22:41.809931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.169 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:19:35.427 00:19:35.427 Discovery Log Number of Records 2, Generation counter 2 00:19:35.427 =====Discovery Log Entry 0====== 00:19:35.427 trtype: tcp 00:19:35.427 adrfam: ipv4 00:19:35.427 subtype: current discovery subsystem 00:19:35.427 treq: not required 00:19:35.427 portid: 0 00:19:35.427 trsvcid: 4420 00:19:35.427 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:35.427 traddr: 10.0.0.2 00:19:35.427 eflags: explicit discovery connections, duplicate discovery information 00:19:35.427 sectype: none 00:19:35.427 =====Discovery Log Entry 1====== 00:19:35.427 trtype: tcp 00:19:35.427 adrfam: ipv4 00:19:35.427 subtype: nvme subsystem 00:19:35.427 treq: not required 00:19:35.427 portid: 0 00:19:35.427 trsvcid: 4420 00:19:35.427 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:35.427 traddr: 10.0.0.2 00:19:35.427 eflags: none 00:19:35.427 sectype: none 00:19:35.427 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:35.427 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:35.427 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:35.427 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:36.798 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:36.798 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:36.798 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.798 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:36.798 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:36.798 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:38.694 12:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:38.694 /dev/nvme0n2 ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:38.694 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:38.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:38.952 12:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.952 rmmod nvme_tcp 00:19:38.952 rmmod nvme_fabrics 00:19:38.952 rmmod nvme_keyring 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3652719 ']' 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3652719 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3652719 ']' 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3652719 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3652719 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3652719' 00:19:38.952 killing process with pid 3652719 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3652719 00:19:38.952 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3652719 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.850 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:42.753 00:19:42.753 real 0m14.682s 00:19:42.753 user 0m26.034s 00:19:42.753 sys 0m5.031s 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 ************************************ 00:19:42.753 END TEST nvmf_nvme_cli 00:19:42.753 ************************************ 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 ************************************ 00:19:42.753 START TEST nvmf_auth_target 00:19:42.753 ************************************ 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
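[Annotation] Teardown above is the mirror image of the bring-up: disconnect the initiator, delete the subsystem, unload the kernel initiator modules (the rmmod lines), kill the target by pid, then strip only the firewall rules this test installed. That last step works because every rule added through ipts() carries an 'SPDK_NVMF' comment, so the whole set can be dropped with one filtered save/restore, exactly as the iptr trace shows:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
  ip -4 addr flush cvl_0_1                               # release the initiator address
  # assumption: _remove_spdk_ns (traced above) deletes the cvl_0_0_ns_spdk namespace,
  # which implicitly returns cvl_0_0 to the root namespace

With the environment restored, the harness prints the per-test timing summary and moves straight into the next suite, nvmf_auth_target, which repeats the same bring-up below.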
00:19:42.753 * Looking for test storage... 00:19:42.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:42.753 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:43.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.011 --rc genhtml_branch_coverage=1 00:19:43.011 --rc genhtml_function_coverage=1 00:19:43.011 --rc genhtml_legend=1 00:19:43.011 --rc geninfo_all_blocks=1 00:19:43.011 --rc geninfo_unexecuted_blocks=1 00:19:43.011 00:19:43.011 ' 00:19:43.011 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:43.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.011 --rc genhtml_branch_coverage=1 00:19:43.011 --rc genhtml_function_coverage=1 00:19:43.012 --rc genhtml_legend=1 00:19:43.012 --rc geninfo_all_blocks=1 00:19:43.012 --rc geninfo_unexecuted_blocks=1 00:19:43.012 00:19:43.012 ' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:43.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.012 --rc genhtml_branch_coverage=1 00:19:43.012 --rc genhtml_function_coverage=1 00:19:43.012 --rc genhtml_legend=1 00:19:43.012 --rc geninfo_all_blocks=1 00:19:43.012 --rc geninfo_unexecuted_blocks=1 00:19:43.012 00:19:43.012 ' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:43.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.012 --rc genhtml_branch_coverage=1 00:19:43.012 --rc genhtml_function_coverage=1 00:19:43.012 --rc genhtml_legend=1 00:19:43.012 --rc geninfo_all_blocks=1 00:19:43.012 --rc geninfo_unexecuted_blocks=1 00:19:43.012 00:19:43.012 ' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.012 12:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:43.012 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:48.278 
12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:48.278 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.278 12:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:48.278 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.278 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:48.278 Found net devices under 0000:af:00.0: cvl_0_0 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:48.279 Found net devices under 0000:af:00.1: cvl_0_1 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.279 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.279 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:48.279 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:48.279 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.279 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.536 12:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:48.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:19:48.536 00:19:48.536 --- 10.0.0.2 ping statistics --- 00:19:48.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.536 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:19:48.536 00:19:48.536 --- 10.0.0.1 ping statistics --- 00:19:48.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.536 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3657127 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3657127 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3657127 ']' 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
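[Annotation] For the auth test, nvmfappstart launches nvmf_tgt with -L nvmf_auth to enable auth-specific debug logging, and waitforlisten then blocks until the new pid (3657127 here) answers on /var/tmp/spdk.sock. A simplified re-creation of that polling pattern, assuming the real helper also validates the socket with an actual RPC call rather than a bare existence check:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
    while (( i++ < 100 )); do                   # max_retries=100, as in the trace
      kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
      [[ -S $rpc_addr ]] && return 0            # RPC socket is up
      sleep 0.1
    done
    return 1
  }

The "Waiting for process to start up and listen on UNIX domain socket..." message traced above is the user-visible side of this loop.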
00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.536 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3657364 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:49.469 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b914b2f4f8ac75f9277515f60f2afb2e151667647466910d 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vlT 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b914b2f4f8ac75f9277515f60f2afb2e151667647466910d 0 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b914b2f4f8ac75f9277515f60f2afb2e151667647466910d 0 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b914b2f4f8ac75f9277515f60f2afb2e151667647466910d 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
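The body of the `python -` heredoc is not echoed by xtrace, but the output format can be cross-checked against the secret this key becomes later in this log (DHHC-1:00:YjkxNGIy...): the 48 generated hex characters are kept as the secret's ASCII bytes, a 4-byte trailer is appended (presumably the CRC-32 of those bytes, little-endian, per the NVMe DH-HMAC-CHAP secret representation), and the blob is base64-encoded between "DHHC-1:<digest>:" and a trailing ":". A sketch of that transform under those assumptions:

    # Hypothetical stand-in for the elided heredoc; prefix/key/digest are the values traced above.
    prefix=DHHC-1; key=b914b2f4f8ac75f9277515f60f2afb2e151667647466910d; digest=0
    python3 - "$prefix" "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
    blob = key + zlib.crc32(key).to_bytes(4, "little")   # secret bytes + assumed CRC-32 trailer
    print(f"{prefix}:{digest:02x}:{base64.b64encode(blob).decode()}:")
    EOF
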
00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vlT 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vlT 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vlT 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=07ef0f3738176d08916b023013e90430cd459ea00f6efb4fabb9ffd7b3a23e5c 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WI4 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 07ef0f3738176d08916b023013e90430cd459ea00f6efb4fabb9ffd7b3a23e5c 3 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 07ef0f3738176d08916b023013e90430cd459ea00f6efb4fabb9ffd7b3a23e5c 3 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=07ef0f3738176d08916b023013e90430cd459ea00f6efb4fabb9ffd7b3a23e5c 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:49.470 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WI4 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WI4 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.WI4 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
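Every gen_dhchap_key call in this run repeats the same recipe, varying only the digest name and the hex length; condensed from the xtrace into one function (a paraphrase of what the traced lines do, not the verbatim helper):

    # Paraphrase of the traced steps: <len> counts hex digits, so xxd reads len/2 random bytes.
    gen_dhchap_key() {
        local digest=$1 len=$2 file key
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. len=64 -> 32 bytes -> 64 hex chars
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"  # redirection assumed; wraps as DHHC-1:<digest>:...:
        chmod 0600 "$file"                                # the secret file must not be world-readable
        echo "$file"                                      # callers store the path in keys[]/ckeys[]
    }
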
00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb994efab3611cee7699db26359baa62 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OvN 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb994efab3611cee7699db26359baa62 1 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb994efab3611cee7699db26359baa62 1 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb994efab3611cee7699db26359baa62 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:49.728 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OvN 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OvN 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.OvN 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8efbca237958a2bd2c2b1922e948c8bf40dcf17f1d0a3cea 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k0K 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8efbca237958a2bd2c2b1922e948c8bf40dcf17f1d0a3cea 2 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8efbca237958a2bd2c2b1922e948c8bf40dcf17f1d0a3cea 2 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:49.729 12:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8efbca237958a2bd2c2b1922e948c8bf40dcf17f1d0a3cea 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k0K 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k0K 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.k0K 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c55be57a616fae14aea8ada16d3e5e6afd3212610b38d713 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WC6 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c55be57a616fae14aea8ada16d3e5e6afd3212610b38d713 2 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c55be57a616fae14aea8ada16d3e5e6afd3212610b38d713 2 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c55be57a616fae14aea8ada16d3e5e6afd3212610b38d713 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WC6 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WC6 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.WC6 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
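Taken together, auth.sh@94-97 fill four key slots so that each host secret (keys[i]) is paired with a controller secret (ckeys[i]) of a different digest and length, and the last slot deliberately gets no controller key; the remaining xtrace below finishes exactly this pattern:

    # Slot layout built in target/auth.sh@94-97 (ckeys[3] left empty, so slot 3 exercises
    # one-way authentication: the host proves itself, the controller is not challenged):
    keys[0]=$(gen_dhchap_key null   48); ckeys[0]=$(gen_dhchap_key sha512 64)
    keys[1]=$(gen_dhchap_key sha256 32); ckeys[1]=$(gen_dhchap_key sha384 48)
    keys[2]=$(gen_dhchap_key sha384 48); ckeys[2]=$(gen_dhchap_key sha256 32)
    keys[3]=$(gen_dhchap_key sha512 64); ckeys[3]=
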
00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=79f648e756c528005339c355d0869ac5 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NKI 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 79f648e756c528005339c355d0869ac5 1 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 79f648e756c528005339c355d0869ac5 1 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=79f648e756c528005339c355d0869ac5 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:49.729 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NKI 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NKI 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.NKI 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0c27d85d1cc9c8ebbca06dbf4782fa815b7139a93267d3685b79694e9101e21c 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3gj 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 0c27d85d1cc9c8ebbca06dbf4782fa815b7139a93267d3685b79694e9101e21c 3 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0c27d85d1cc9c8ebbca06dbf4782fa815b7139a93267d3685b79694e9101e21c 3 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0c27d85d1cc9c8ebbca06dbf4782fa815b7139a93267d3685b79694e9101e21c 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3gj 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3gj 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.3gj 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3657127 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3657127 ']' 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.987 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3657364 /var/tmp/host.sock 00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3657364 ']' 00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:49.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
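From this point the test drives two SPDK processes at once: the nvmf_tgt target launched earlier inside the namespace (RPC on the default /var/tmp/spdk.sock, wrapped by rpc_cmd) and the host-side spdk_tgt (RPC on /var/tmp/host.sock, wrapped by hostrpc, see target/auth.sh@31 below). Each DHHC-1 file therefore has to be registered in both keyrings before any authenticated connect can succeed, which is what the auth.sh@108-113 loop below does; schematically:

    # Schematic of the registration loop traced below: both sides need the same key material.
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"     # target-side keyring
        hostrpc keyring_file_add_key "key$i" "${keys[$i]}"     # initiator-side keyring
        if [[ -n ${ckeys[$i]} ]]; then                         # controller key, when the slot has one
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done
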
00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.988 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vlT 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.554 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vlT 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vlT 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.WI4 ]] 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WI4 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WI4 00:19:50.812 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WI4 00:19:51.070 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:51.070 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OvN 00:19:51.070 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.070 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.070 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.070 12:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OvN 00:19:51.070 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OvN 00:19:51.326 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.k0K ]] 00:19:51.326 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k0K 00:19:51.326 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.326 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.326 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.326 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k0K 00:19:51.326 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k0K 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WC6 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.WC6 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.WC6 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.NKI ]] 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NKI 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.583 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NKI 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NKI 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:51.841 12:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3gj 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.3gj 00:19:51.841 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.3gj 00:19:52.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:52.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:52.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.357 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.357 
12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.614 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.614 { 00:19:52.614 "cntlid": 1, 00:19:52.614 "qid": 0, 00:19:52.614 "state": "enabled", 00:19:52.614 "thread": "nvmf_tgt_poll_group_000", 00:19:52.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:52.614 "listen_address": { 00:19:52.614 "trtype": "TCP", 00:19:52.614 "adrfam": "IPv4", 00:19:52.614 "traddr": "10.0.0.2", 00:19:52.614 "trsvcid": "4420" 00:19:52.614 }, 00:19:52.614 "peer_address": { 00:19:52.614 "trtype": "TCP", 00:19:52.614 "adrfam": "IPv4", 00:19:52.614 "traddr": "10.0.0.1", 00:19:52.614 "trsvcid": "49082" 00:19:52.614 }, 00:19:52.614 "auth": { 00:19:52.614 "state": "completed", 00:19:52.614 "digest": "sha256", 00:19:52.614 "dhgroup": "null" 00:19:52.614 } 00:19:52.614 } 00:19:52.614 ]' 00:19:52.614 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.872 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.872 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.872 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.872 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.872 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.872 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.872 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.129 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:19:53.129 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.691 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.948 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:53.948 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.948 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.948 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.948 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.949 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.949 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.949 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.949 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.949 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.949 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.949 12:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.949 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.949 00:19:54.205 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.205 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.206 { 00:19:54.206 "cntlid": 3, 00:19:54.206 "qid": 0, 00:19:54.206 "state": "enabled", 00:19:54.206 "thread": "nvmf_tgt_poll_group_000", 00:19:54.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:54.206 "listen_address": { 00:19:54.206 "trtype": "TCP", 00:19:54.206 "adrfam": "IPv4", 00:19:54.206 "traddr": "10.0.0.2", 00:19:54.206 "trsvcid": "4420" 00:19:54.206 }, 00:19:54.206 "peer_address": { 00:19:54.206 "trtype": "TCP", 00:19:54.206 "adrfam": "IPv4", 00:19:54.206 "traddr": "10.0.0.1", 00:19:54.206 "trsvcid": "49096" 00:19:54.206 }, 00:19:54.206 "auth": { 00:19:54.206 "state": "completed", 00:19:54.206 "digest": "sha256", 00:19:54.206 "dhgroup": "null" 00:19:54.206 } 00:19:54.206 } 00:19:54.206 ]' 00:19:54.206 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.206 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.206 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.463 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.463 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.463 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.463 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.463 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.463 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:19:54.463 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:19:55.027 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.027 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.027 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.027 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.285 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.285 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.285 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.285 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.285 12:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.285 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.542 00:19:55.542 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.542 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.542 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.800 { 00:19:55.800 "cntlid": 5, 00:19:55.800 "qid": 0, 00:19:55.800 "state": "enabled", 00:19:55.800 "thread": "nvmf_tgt_poll_group_000", 00:19:55.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.800 "listen_address": { 00:19:55.800 "trtype": "TCP", 00:19:55.800 "adrfam": "IPv4", 00:19:55.800 "traddr": "10.0.0.2", 00:19:55.800 "trsvcid": "4420" 00:19:55.800 }, 00:19:55.800 "peer_address": { 00:19:55.800 "trtype": "TCP", 00:19:55.800 "adrfam": "IPv4", 00:19:55.800 "traddr": "10.0.0.1", 00:19:55.800 "trsvcid": "49126" 00:19:55.800 }, 00:19:55.800 "auth": { 00:19:55.800 "state": "completed", 00:19:55.800 "digest": "sha256", 00:19:55.800 "dhgroup": "null" 00:19:55.800 } 00:19:55.800 } 00:19:55.800 ]' 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:55.800 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.058 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.058 12:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.058 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.058 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:19:56.058 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:56.623 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.882 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.141 00:19:57.141 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.141 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.141 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.400 { 00:19:57.400 "cntlid": 7, 00:19:57.400 "qid": 0, 00:19:57.400 "state": "enabled", 00:19:57.400 "thread": "nvmf_tgt_poll_group_000", 00:19:57.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.400 "listen_address": { 00:19:57.400 "trtype": "TCP", 00:19:57.400 "adrfam": "IPv4", 00:19:57.400 "traddr": "10.0.0.2", 00:19:57.400 "trsvcid": "4420" 00:19:57.400 }, 00:19:57.400 "peer_address": { 00:19:57.400 "trtype": "TCP", 00:19:57.400 "adrfam": "IPv4", 00:19:57.400 "traddr": "10.0.0.1", 00:19:57.400 "trsvcid": "49142" 00:19:57.400 }, 00:19:57.400 "auth": { 00:19:57.400 "state": "completed", 00:19:57.400 "digest": "sha256", 00:19:57.400 "dhgroup": "null" 00:19:57.400 } 00:19:57.400 } 00:19:57.400 ]' 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.400 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.659 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:19:57.659 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.226 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.486 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.744 00:19:58.744 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.744 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.744 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.002 { 00:19:59.002 "cntlid": 9, 00:19:59.002 "qid": 0, 00:19:59.002 "state": "enabled", 00:19:59.002 "thread": "nvmf_tgt_poll_group_000", 00:19:59.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.002 "listen_address": { 00:19:59.002 "trtype": "TCP", 00:19:59.002 "adrfam": "IPv4", 00:19:59.002 "traddr": "10.0.0.2", 00:19:59.002 "trsvcid": "4420" 00:19:59.002 }, 00:19:59.002 "peer_address": { 00:19:59.002 "trtype": "TCP", 00:19:59.002 "adrfam": "IPv4", 00:19:59.002 "traddr": "10.0.0.1", 00:19:59.002 "trsvcid": "49166" 00:19:59.002 }, 00:19:59.002 "auth": { 00:19:59.002 "state": "completed", 00:19:59.002 "digest": "sha256", 00:19:59.002 "dhgroup": "ffdhe2048" 00:19:59.002 } 00:19:59.002 } 00:19:59.002 ]' 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.002 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.260 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:19:59.260 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.828 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.087 12:23:06 
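Before every connect_authenticate round, the host-side SPDK app is pinned to exactly one digest/DH-group combination via bdev_nvme_set_options, so a successful attach can only mean that specific pairing was negotiated. The equivalent standalone call, with the host socket path used throughout this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Restrict DH-HMAC-CHAP on the host side to sha256 + ffdhe2048 only.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048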
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.087 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.345 00:20:00.345 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.345 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.345 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.603 { 00:20:00.603 "cntlid": 11, 00:20:00.603 "qid": 0, 00:20:00.603 "state": "enabled", 00:20:00.603 "thread": "nvmf_tgt_poll_group_000", 00:20:00.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.603 "listen_address": { 00:20:00.603 "trtype": "TCP", 00:20:00.603 "adrfam": "IPv4", 00:20:00.603 "traddr": "10.0.0.2", 00:20:00.603 "trsvcid": "4420" 00:20:00.603 }, 00:20:00.603 "peer_address": { 00:20:00.603 "trtype": "TCP", 00:20:00.603 "adrfam": "IPv4", 00:20:00.603 "traddr": "10.0.0.1", 00:20:00.603 "trsvcid": "37280" 00:20:00.603 }, 00:20:00.603 "auth": { 00:20:00.603 "state": "completed", 00:20:00.603 "digest": "sha256", 00:20:00.603 "dhgroup": "ffdhe2048" 00:20:00.603 } 00:20:00.603 } 00:20:00.603 ]' 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.603 12:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.603 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.862 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:00.862 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.429 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:01.711 12:23:08 
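On the target side, each key is authorized for the host NQN with nvmf_subsystem_add_host. Keys 0 through 2 are registered together with a controller key (ckeyN), making authentication bidirectional, while key3 is added without one (unidirectional only; see the ${ckeys[$3]:+...} expansion at target/auth.sh@68). A sketch of the bidirectional case, assuming key2/ckey2 were already loaded into the target's keyring and rpc.py talks to the target's default socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # key2: target authenticates the host; ckey2: target proves itself back.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2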
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.711 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.970 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.970 { 00:20:01.970 "cntlid": 13, 00:20:01.970 "qid": 0, 00:20:01.970 "state": "enabled", 00:20:01.970 "thread": "nvmf_tgt_poll_group_000", 00:20:01.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:01.970 "listen_address": { 00:20:01.970 "trtype": "TCP", 00:20:01.970 "adrfam": "IPv4", 00:20:01.970 "traddr": "10.0.0.2", 00:20:01.970 "trsvcid": "4420" 00:20:01.970 }, 00:20:01.970 "peer_address": { 00:20:01.970 "trtype": "TCP", 00:20:01.970 "adrfam": "IPv4", 00:20:01.970 "traddr": "10.0.0.1", 00:20:01.970 "trsvcid": "37310" 00:20:01.970 }, 00:20:01.970 "auth": { 00:20:01.970 "state": "completed", 00:20:01.970 "digest": 
"sha256", 00:20:01.970 "dhgroup": "ffdhe2048" 00:20:01.970 } 00:20:01.970 } 00:20:01.970 ]' 00:20:01.970 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.229 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.229 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.229 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:02.229 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.229 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.229 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.229 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.487 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:02.487 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.055 12:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.055 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.313 00:20:03.313 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.313 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.313 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.572 { 00:20:03.572 "cntlid": 15, 00:20:03.572 "qid": 0, 00:20:03.572 "state": "enabled", 00:20:03.572 "thread": "nvmf_tgt_poll_group_000", 00:20:03.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.572 "listen_address": { 00:20:03.572 "trtype": "TCP", 00:20:03.572 "adrfam": "IPv4", 00:20:03.572 "traddr": "10.0.0.2", 00:20:03.572 "trsvcid": "4420" 00:20:03.572 }, 00:20:03.572 "peer_address": { 00:20:03.572 "trtype": "TCP", 00:20:03.572 "adrfam": "IPv4", 00:20:03.572 "traddr": "10.0.0.1", 00:20:03.572 
"trsvcid": "37340" 00:20:03.572 }, 00:20:03.572 "auth": { 00:20:03.572 "state": "completed", 00:20:03.572 "digest": "sha256", 00:20:03.572 "dhgroup": "ffdhe2048" 00:20:03.572 } 00:20:03.572 } 00:20:03.572 ]' 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.572 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.831 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.831 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.831 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.831 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:03.831 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.398 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:04.657 12:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.657 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.916 00:20:04.916 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.916 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.916 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.175 { 00:20:05.175 "cntlid": 17, 00:20:05.175 "qid": 0, 00:20:05.175 "state": "enabled", 00:20:05.175 "thread": "nvmf_tgt_poll_group_000", 00:20:05.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.175 "listen_address": { 00:20:05.175 "trtype": "TCP", 00:20:05.175 "adrfam": "IPv4", 
00:20:05.175 "traddr": "10.0.0.2", 00:20:05.175 "trsvcid": "4420" 00:20:05.175 }, 00:20:05.175 "peer_address": { 00:20:05.175 "trtype": "TCP", 00:20:05.175 "adrfam": "IPv4", 00:20:05.175 "traddr": "10.0.0.1", 00:20:05.175 "trsvcid": "37362" 00:20:05.175 }, 00:20:05.175 "auth": { 00:20:05.175 "state": "completed", 00:20:05.175 "digest": "sha256", 00:20:05.175 "dhgroup": "ffdhe3072" 00:20:05.175 } 00:20:05.175 } 00:20:05.175 ]' 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.175 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.434 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:05.434 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.001 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.260 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.519 00:20:06.519 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.519 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.519 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.777 { 
00:20:06.777 "cntlid": 19, 00:20:06.777 "qid": 0, 00:20:06.777 "state": "enabled", 00:20:06.777 "thread": "nvmf_tgt_poll_group_000", 00:20:06.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.777 "listen_address": { 00:20:06.777 "trtype": "TCP", 00:20:06.777 "adrfam": "IPv4", 00:20:06.777 "traddr": "10.0.0.2", 00:20:06.777 "trsvcid": "4420" 00:20:06.777 }, 00:20:06.777 "peer_address": { 00:20:06.777 "trtype": "TCP", 00:20:06.777 "adrfam": "IPv4", 00:20:06.777 "traddr": "10.0.0.1", 00:20:06.777 "trsvcid": "37396" 00:20:06.777 }, 00:20:06.777 "auth": { 00:20:06.777 "state": "completed", 00:20:06.777 "digest": "sha256", 00:20:06.777 "dhgroup": "ffdhe3072" 00:20:06.777 } 00:20:06.777 } 00:20:06.777 ]' 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.777 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.778 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.036 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:07.036 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
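Each pass also cleans up after itself: the SPDK-side controller is detached once the qpair checks pass (target/auth.sh@78), and after the kernel-initiator round trip the controller is disconnected and the host NQN de-authorized (@82, @83), so the next key starts from a clean slate. The same three calls in isolation, with names and socket paths from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0  # host-side bdev
    nvme disconnect -n "$subnqn"                                    # kernel initiator
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"          # target-side ACL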
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.602 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.860 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.134 00:20:08.134 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.134 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.134 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.500 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.500 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.500 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.500 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.500 12:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.500 { 00:20:08.500 "cntlid": 21, 00:20:08.500 "qid": 0, 00:20:08.500 "state": "enabled", 00:20:08.500 "thread": "nvmf_tgt_poll_group_000", 00:20:08.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.500 "listen_address": { 00:20:08.500 "trtype": "TCP", 00:20:08.500 "adrfam": "IPv4", 00:20:08.500 "traddr": "10.0.0.2", 00:20:08.500 "trsvcid": "4420" 00:20:08.500 }, 00:20:08.500 "peer_address": { 00:20:08.500 "trtype": "TCP", 00:20:08.500 "adrfam": "IPv4", 00:20:08.500 "traddr": "10.0.0.1", 00:20:08.500 "trsvcid": "37440" 00:20:08.500 }, 00:20:08.500 "auth": { 00:20:08.500 "state": "completed", 00:20:08.500 "digest": "sha256", 00:20:08.500 "dhgroup": "ffdhe3072" 00:20:08.500 } 00:20:08.500 } 00:20:08.500 ]' 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.500 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.758 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:08.758 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:09.324 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.324 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.582 00:20:09.582 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.582 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.582 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.840 12:23:16 
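Every "hostrpc ..." line in this trace expands (at target/auth.sh@31) into the same wrapper: rpc.py pointed at /var/tmp/host.sock, the RPC socket of the second, host-side SPDK application, as opposed to the target's own socket used by rpc_cmd. Reconstructed from those expansions, the wrapper is presumably just:

    # hostrpc: route an RPC to the host-side SPDK app instead of the target.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }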
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.840 { 00:20:09.840 "cntlid": 23, 00:20:09.840 "qid": 0, 00:20:09.840 "state": "enabled", 00:20:09.840 "thread": "nvmf_tgt_poll_group_000", 00:20:09.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.840 "listen_address": { 00:20:09.840 "trtype": "TCP", 00:20:09.840 "adrfam": "IPv4", 00:20:09.840 "traddr": "10.0.0.2", 00:20:09.840 "trsvcid": "4420" 00:20:09.840 }, 00:20:09.840 "peer_address": { 00:20:09.840 "trtype": "TCP", 00:20:09.840 "adrfam": "IPv4", 00:20:09.840 "traddr": "10.0.0.1", 00:20:09.840 "trsvcid": "37476" 00:20:09.840 }, 00:20:09.840 "auth": { 00:20:09.840 "state": "completed", 00:20:09.840 "digest": "sha256", 00:20:09.840 "dhgroup": "ffdhe3072" 00:20:09.840 } 00:20:09.840 } 00:20:09.840 ]' 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.840 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.099 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.099 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.099 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.099 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.099 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.099 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:10.099 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:10.665 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.665 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.665 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.924 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.183 00:20:11.183 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.183 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.183 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.441 { 00:20:11.441 "cntlid": 25, 00:20:11.441 "qid": 0, 00:20:11.441 "state": "enabled", 00:20:11.441 "thread": "nvmf_tgt_poll_group_000", 00:20:11.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.441 "listen_address": { 00:20:11.441 "trtype": "TCP", 00:20:11.441 "adrfam": "IPv4", 00:20:11.441 "traddr": "10.0.0.2", 00:20:11.441 "trsvcid": "4420" 00:20:11.441 }, 00:20:11.441 "peer_address": { 00:20:11.441 "trtype": "TCP", 00:20:11.441 "adrfam": "IPv4", 00:20:11.441 "traddr": "10.0.0.1", 00:20:11.441 "trsvcid": "56368" 00:20:11.441 }, 00:20:11.441 "auth": { 00:20:11.441 "state": "completed", 00:20:11.441 "digest": "sha256", 00:20:11.441 "dhgroup": "ffdhe4096" 00:20:11.441 } 00:20:11.441 } 00:20:11.441 ]' 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.441 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.700 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.700 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.700 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.700 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.700 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.700 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:11.700 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.268 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.527 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.785 00:20:12.785 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.785 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.785 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.044 { 00:20:13.044 "cntlid": 27, 00:20:13.044 "qid": 0, 00:20:13.044 "state": "enabled", 00:20:13.044 "thread": "nvmf_tgt_poll_group_000", 00:20:13.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.044 "listen_address": { 00:20:13.044 "trtype": "TCP", 00:20:13.044 "adrfam": "IPv4", 00:20:13.044 "traddr": "10.0.0.2", 00:20:13.044 "trsvcid": "4420" 00:20:13.044 }, 00:20:13.044 "peer_address": { 00:20:13.044 "trtype": "TCP", 00:20:13.044 "adrfam": "IPv4", 00:20:13.044 "traddr": "10.0.0.1", 00:20:13.044 "trsvcid": "56398" 00:20:13.044 }, 00:20:13.044 "auth": { 00:20:13.044 "state": "completed", 00:20:13.044 "digest": "sha256", 00:20:13.044 "dhgroup": "ffdhe4096" 00:20:13.044 } 00:20:13.044 } 00:20:13.044 ]' 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.044 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.303 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.303 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.303 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.303 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:13.303 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:13.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.869 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.128 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.386 00:20:14.386 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
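[editor's note] Condensed for reference, the round trip these entries keep repeating (one pass per key pair, per dhgroup) looks roughly like the sketch below. This is a reconstruction from the logged commands, not the verbatim target/auth.sh source: the RPC shorthand, the fixed key list, and the loop shape are assumptions, and the log actually registers key3 without a controller key, which this sketch glosses over by passing both.

    # Reconstructed sketch of one DH-HMAC-CHAP round trip (assumptions: rpc.py
    # reaches the target's default RPC socket, /var/tmp/host.sock is the
    # host-side bdev RPC server, and keyN/ckeyN were loaded earlier in the test).
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SUBNQN="nqn.2024-03.io.spdk:cnode0"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"

    for keyid in 0 1 2 3; do
        # Pin the initiator to one digest/dhgroup combination for this pass.
        "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
        # Authorize the host on the subsystem with the matching key pair.
        "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Attaching a controller forces the handshake; it fails if auth fails.
        "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # ... verify via nvmf_subsystem_get_qpairs (see below), then tear down ...
        "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    done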
00:20:14.386 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.386 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.645 { 00:20:14.645 "cntlid": 29, 00:20:14.645 "qid": 0, 00:20:14.645 "state": "enabled", 00:20:14.645 "thread": "nvmf_tgt_poll_group_000", 00:20:14.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.645 "listen_address": { 00:20:14.645 "trtype": "TCP", 00:20:14.645 "adrfam": "IPv4", 00:20:14.645 "traddr": "10.0.0.2", 00:20:14.645 "trsvcid": "4420" 00:20:14.645 }, 00:20:14.645 "peer_address": { 00:20:14.645 "trtype": "TCP", 00:20:14.645 "adrfam": "IPv4", 00:20:14.645 "traddr": "10.0.0.1", 00:20:14.645 "trsvcid": "56420" 00:20:14.645 }, 00:20:14.645 "auth": { 00:20:14.645 "state": "completed", 00:20:14.645 "digest": "sha256", 00:20:14.645 "dhgroup": "ffdhe4096" 00:20:14.645 } 00:20:14.645 } 00:20:14.645 ]' 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.645 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.646 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.646 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.646 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.646 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.646 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.905 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:14.905 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: 
--dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.473 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.732 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.991 00:20:15.991 12:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.249 { 00:20:16.249 "cntlid": 31, 00:20:16.249 "qid": 0, 00:20:16.249 "state": "enabled", 00:20:16.249 "thread": "nvmf_tgt_poll_group_000", 00:20:16.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.249 "listen_address": { 00:20:16.249 "trtype": "TCP", 00:20:16.249 "adrfam": "IPv4", 00:20:16.249 "traddr": "10.0.0.2", 00:20:16.249 "trsvcid": "4420" 00:20:16.249 }, 00:20:16.249 "peer_address": { 00:20:16.249 "trtype": "TCP", 00:20:16.249 "adrfam": "IPv4", 00:20:16.249 "traddr": "10.0.0.1", 00:20:16.249 "trsvcid": "56452" 00:20:16.249 }, 00:20:16.249 "auth": { 00:20:16.249 "state": "completed", 00:20:16.249 "digest": "sha256", 00:20:16.249 "dhgroup": "ffdhe4096" 00:20:16.249 } 00:20:16.249 } 00:20:16.249 ]' 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.249 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.249 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.249 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.249 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.509 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:16.509 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.076 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.335 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.594 00:20:17.594 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.594 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.594 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.853 { 00:20:17.853 "cntlid": 33, 00:20:17.853 "qid": 0, 00:20:17.853 "state": "enabled", 00:20:17.853 "thread": "nvmf_tgt_poll_group_000", 00:20:17.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.853 "listen_address": { 00:20:17.853 "trtype": "TCP", 00:20:17.853 "adrfam": "IPv4", 00:20:17.853 "traddr": "10.0.0.2", 00:20:17.853 "trsvcid": "4420" 00:20:17.853 }, 00:20:17.853 "peer_address": { 00:20:17.853 "trtype": "TCP", 00:20:17.853 "adrfam": "IPv4", 00:20:17.853 "traddr": "10.0.0.1", 00:20:17.853 "trsvcid": "56470" 00:20:17.853 }, 00:20:17.853 "auth": { 00:20:17.853 "state": "completed", 00:20:17.853 "digest": "sha256", 00:20:17.853 "dhgroup": "ffdhe6144" 00:20:17.853 } 00:20:17.853 } 00:20:17.853 ]' 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.853 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.112 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret 
DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:18.112 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.679 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.937 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.195 00:20:19.195 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.195 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.195 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.454 { 00:20:19.454 "cntlid": 35, 00:20:19.454 "qid": 0, 00:20:19.454 "state": "enabled", 00:20:19.454 "thread": "nvmf_tgt_poll_group_000", 00:20:19.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.454 "listen_address": { 00:20:19.454 "trtype": "TCP", 00:20:19.454 "adrfam": "IPv4", 00:20:19.454 "traddr": "10.0.0.2", 00:20:19.454 "trsvcid": "4420" 00:20:19.454 }, 00:20:19.454 "peer_address": { 00:20:19.454 "trtype": "TCP", 00:20:19.454 "adrfam": "IPv4", 00:20:19.454 "traddr": "10.0.0.1", 00:20:19.454 "trsvcid": "56492" 00:20:19.454 }, 00:20:19.454 "auth": { 00:20:19.454 "state": "completed", 00:20:19.454 "digest": "sha256", 00:20:19.454 "dhgroup": "ffdhe6144" 00:20:19.454 } 00:20:19.454 } 00:20:19.454 ]' 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.454 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.713 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.713 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.713 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.713 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.713 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.713 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:19.713 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.280 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.538 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.797 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.055 { 00:20:21.055 "cntlid": 37, 00:20:21.055 "qid": 0, 00:20:21.055 "state": "enabled", 00:20:21.055 "thread": "nvmf_tgt_poll_group_000", 00:20:21.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.055 "listen_address": { 00:20:21.055 "trtype": "TCP", 00:20:21.055 "adrfam": "IPv4", 00:20:21.055 "traddr": "10.0.0.2", 00:20:21.055 "trsvcid": "4420" 00:20:21.055 }, 00:20:21.055 "peer_address": { 00:20:21.055 "trtype": "TCP", 00:20:21.055 "adrfam": "IPv4", 00:20:21.055 "traddr": "10.0.0.1", 00:20:21.055 "trsvcid": "37022" 00:20:21.055 }, 00:20:21.055 "auth": { 00:20:21.055 "state": "completed", 00:20:21.055 "digest": "sha256", 00:20:21.055 "dhgroup": "ffdhe6144" 00:20:21.055 } 00:20:21.055 } 00:20:21.055 ]' 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.055 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.313 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.313 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.313 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.313 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:21.313 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.572 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:21.572 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.138 12:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.138 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.704 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.704 { 00:20:22.704 "cntlid": 39, 00:20:22.704 "qid": 0, 00:20:22.704 "state": "enabled", 00:20:22.704 "thread": "nvmf_tgt_poll_group_000", 00:20:22.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.704 "listen_address": { 00:20:22.704 "trtype": "TCP", 00:20:22.704 "adrfam": "IPv4", 00:20:22.704 "traddr": "10.0.0.2", 00:20:22.704 "trsvcid": "4420" 00:20:22.704 }, 00:20:22.704 "peer_address": { 00:20:22.704 "trtype": "TCP", 00:20:22.704 "adrfam": "IPv4", 00:20:22.704 "traddr": "10.0.0.1", 00:20:22.704 "trsvcid": "37052" 00:20:22.704 }, 00:20:22.704 "auth": { 00:20:22.704 "state": "completed", 00:20:22.704 "digest": "sha256", 00:20:22.704 "dhgroup": "ffdhe6144" 00:20:22.704 } 00:20:22.704 } 00:20:22.704 ]' 00:20:22.704 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.962 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.962 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.962 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.962 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.962 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:22.962 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.962 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.221 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:23.221 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.788 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
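[editor's note] After each attach, the log runs the same three jq probes against the subsystem's qpair list to confirm the negotiated parameters. Condensed, the check looks like the sketch below; the command, the jq field paths, and the expected values are taken verbatim from the surrounding entries (ffdhe8192 is the dhgroup in play for this pass), while the variable capture is an illustrative rearrangement.

    # Verify the negotiated auth parameters on the target side: pull the qpairs
    # for the subsystem and compare digest, dhgroup, and auth state.
    qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # The handshake only counts as good if all three fields match expectations.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The in-band equivalent, also exercised throughout this log, drives the same handshake from the kernel initiator via nvme connect with --dhchap-secret/--dhchap-ctrl-secret, followed by nvme disconnect -n on the subsystem NQN.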
00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.047 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.306 00:20:24.306 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.306 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.306 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.567 { 00:20:24.567 "cntlid": 41, 00:20:24.567 "qid": 0, 00:20:24.567 "state": "enabled", 00:20:24.567 "thread": "nvmf_tgt_poll_group_000", 00:20:24.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.567 "listen_address": { 00:20:24.567 "trtype": "TCP", 00:20:24.567 "adrfam": "IPv4", 00:20:24.567 "traddr": "10.0.0.2", 00:20:24.567 "trsvcid": "4420" 00:20:24.567 }, 00:20:24.567 "peer_address": { 00:20:24.567 "trtype": "TCP", 00:20:24.567 "adrfam": "IPv4", 00:20:24.567 "traddr": "10.0.0.1", 00:20:24.567 "trsvcid": "37074" 00:20:24.567 }, 00:20:24.567 "auth": { 00:20:24.567 "state": "completed", 00:20:24.567 "digest": "sha256", 00:20:24.567 "dhgroup": "ffdhe8192" 00:20:24.567 } 00:20:24.567 } 00:20:24.567 ]' 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.567 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.568 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.827 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.827 12:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.827 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.827 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.827 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.827 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:24.827 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:25.395 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.395 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.395 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.677 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.245 00:20:26.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.504 { 00:20:26.504 "cntlid": 43, 00:20:26.504 "qid": 0, 00:20:26.504 "state": "enabled", 00:20:26.504 "thread": "nvmf_tgt_poll_group_000", 00:20:26.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.504 "listen_address": { 00:20:26.504 "trtype": "TCP", 00:20:26.504 "adrfam": "IPv4", 00:20:26.504 "traddr": "10.0.0.2", 00:20:26.504 "trsvcid": "4420" 00:20:26.504 }, 00:20:26.504 "peer_address": { 00:20:26.504 "trtype": "TCP", 00:20:26.504 "adrfam": "IPv4", 00:20:26.504 "traddr": "10.0.0.1", 00:20:26.504 "trsvcid": "37092" 00:20:26.504 }, 00:20:26.504 "auth": { 00:20:26.504 "state": "completed", 00:20:26.504 "digest": "sha256", 00:20:26.504 "dhgroup": "ffdhe8192" 00:20:26.504 } 00:20:26.504 } 00:20:26.504 ]' 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.504 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.762 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:26.762 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.328 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.588 12:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.588 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.155 00:20:28.155 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.155 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.156 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.156 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.156 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.156 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.156 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.156 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.156 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.156 { 00:20:28.156 "cntlid": 45, 00:20:28.156 "qid": 0, 00:20:28.156 "state": "enabled", 00:20:28.156 "thread": "nvmf_tgt_poll_group_000", 00:20:28.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.156 "listen_address": { 00:20:28.156 "trtype": "TCP", 00:20:28.156 "adrfam": "IPv4", 00:20:28.156 "traddr": "10.0.0.2", 00:20:28.156 "trsvcid": "4420" 00:20:28.156 }, 00:20:28.156 "peer_address": { 00:20:28.156 "trtype": "TCP", 00:20:28.156 "adrfam": "IPv4", 00:20:28.156 "traddr": "10.0.0.1", 00:20:28.156 "trsvcid": "37122" 00:20:28.156 }, 00:20:28.156 "auth": { 00:20:28.156 "state": "completed", 00:20:28.156 "digest": "sha256", 00:20:28.156 "dhgroup": "ffdhe8192" 00:20:28.156 } 00:20:28.156 } 00:20:28.156 ]' 00:20:28.156 
12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.414 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.414 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.414 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.414 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.414 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.414 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.414 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.673 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:28.673 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.240 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.240 12:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.240 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.499 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.758 00:20:29.758 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.758 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.758 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.017 { 00:20:30.017 "cntlid": 47, 00:20:30.017 "qid": 0, 00:20:30.017 "state": "enabled", 00:20:30.017 "thread": "nvmf_tgt_poll_group_000", 00:20:30.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.017 "listen_address": { 00:20:30.017 "trtype": "TCP", 00:20:30.017 "adrfam": "IPv4", 00:20:30.017 "traddr": "10.0.0.2", 00:20:30.017 "trsvcid": "4420" 00:20:30.017 }, 00:20:30.017 "peer_address": { 00:20:30.017 "trtype": "TCP", 00:20:30.017 "adrfam": "IPv4", 00:20:30.017 "traddr": "10.0.0.1", 00:20:30.017 "trsvcid": "37154" 00:20:30.017 }, 00:20:30.017 "auth": { 00:20:30.017 "state": "completed", 00:20:30.017 
"digest": "sha256", 00:20:30.017 "dhgroup": "ffdhe8192" 00:20:30.017 } 00:20:30.017 } 00:20:30.017 ]' 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.017 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.276 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.276 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.276 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.276 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:30.276 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:30.844 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.844 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.844 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.844 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:31.103 12:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.103 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.362 00:20:31.362 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.362 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.362 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.621 { 00:20:31.621 "cntlid": 49, 00:20:31.621 "qid": 0, 00:20:31.621 "state": "enabled", 00:20:31.621 "thread": "nvmf_tgt_poll_group_000", 00:20:31.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.621 "listen_address": { 00:20:31.621 "trtype": "TCP", 00:20:31.621 "adrfam": "IPv4", 
00:20:31.621 "traddr": "10.0.0.2", 00:20:31.621 "trsvcid": "4420" 00:20:31.621 }, 00:20:31.621 "peer_address": { 00:20:31.621 "trtype": "TCP", 00:20:31.621 "adrfam": "IPv4", 00:20:31.621 "traddr": "10.0.0.1", 00:20:31.621 "trsvcid": "50674" 00:20:31.621 }, 00:20:31.621 "auth": { 00:20:31.621 "state": "completed", 00:20:31.621 "digest": "sha384", 00:20:31.621 "dhgroup": "null" 00:20:31.621 } 00:20:31.621 } 00:20:31.621 ]' 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.621 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.880 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:31.880 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.447 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.706 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.965 00:20:32.965 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.965 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.965 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.225 { 00:20:33.225 "cntlid": 51, 00:20:33.225 "qid": 0, 00:20:33.225 "state": "enabled", 
00:20:33.225 "thread": "nvmf_tgt_poll_group_000", 00:20:33.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.225 "listen_address": { 00:20:33.225 "trtype": "TCP", 00:20:33.225 "adrfam": "IPv4", 00:20:33.225 "traddr": "10.0.0.2", 00:20:33.225 "trsvcid": "4420" 00:20:33.225 }, 00:20:33.225 "peer_address": { 00:20:33.225 "trtype": "TCP", 00:20:33.225 "adrfam": "IPv4", 00:20:33.225 "traddr": "10.0.0.1", 00:20:33.225 "trsvcid": "50698" 00:20:33.225 }, 00:20:33.225 "auth": { 00:20:33.225 "state": "completed", 00:20:33.225 "digest": "sha384", 00:20:33.225 "dhgroup": "null" 00:20:33.225 } 00:20:33.225 } 00:20:33.225 ]' 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.225 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.484 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:33.484 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
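Each attach in the sweep is verified the way the surrounding entries show: dump the subsystem's qpairs on the target, compare the negotiated auth parameters against what was configured, then detach before the next digest/dhgroup/key combination. A condensed sketch of that check, under the same placeholder assumptions as the earlier snippet:

  # Target side: confirm the negotiated digest/dhgroup and a completed auth state.
  qpairs=$("$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Host side: drop the controller before the next combination.
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Periodically the trace also exercises the kernel initiator with raw DHHC-1
  # secrets, then tears the host registration down:
  #   nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
  #       --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
  #       --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
  #   nvme disconnect -n "$SUBNQN"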
00:20:34.051 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.309 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:34.309 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.310 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.569 00:20:34.569 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.569 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.569 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.827 12:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.827 { 00:20:34.827 "cntlid": 53, 00:20:34.827 "qid": 0, 00:20:34.827 "state": "enabled", 00:20:34.827 "thread": "nvmf_tgt_poll_group_000", 00:20:34.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.827 "listen_address": { 00:20:34.827 "trtype": "TCP", 00:20:34.827 "adrfam": "IPv4", 00:20:34.827 "traddr": "10.0.0.2", 00:20:34.827 "trsvcid": "4420" 00:20:34.827 }, 00:20:34.827 "peer_address": { 00:20:34.827 "trtype": "TCP", 00:20:34.827 "adrfam": "IPv4", 00:20:34.827 "traddr": "10.0.0.1", 00:20:34.827 "trsvcid": "50718" 00:20:34.827 }, 00:20:34.827 "auth": { 00:20:34.827 "state": "completed", 00:20:34.827 "digest": "sha384", 00:20:34.827 "dhgroup": "null" 00:20:34.827 } 00:20:34.827 } 00:20:34.827 ]' 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.827 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.085 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:35.085 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:35.652 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.652 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.652 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.652 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.652 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.652 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:35.653 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.653 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.911 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:35.911 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.911 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.911 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:35.911 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.911 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.911 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:35.912 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.912 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.912 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.912 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.912 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.912 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.171 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.171 { 00:20:36.171 "cntlid": 55, 00:20:36.171 "qid": 0, 00:20:36.171 "state": "enabled", 00:20:36.171 "thread": "nvmf_tgt_poll_group_000", 00:20:36.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.171 "listen_address": { 00:20:36.171 "trtype": "TCP", 00:20:36.171 "adrfam": "IPv4", 00:20:36.171 "traddr": "10.0.0.2", 00:20:36.171 "trsvcid": "4420" 00:20:36.171 }, 00:20:36.171 "peer_address": { 00:20:36.171 "trtype": "TCP", 00:20:36.171 "adrfam": "IPv4", 00:20:36.171 "traddr": "10.0.0.1", 00:20:36.171 "trsvcid": "50764" 00:20:36.171 }, 00:20:36.171 "auth": { 00:20:36.171 "state": "completed", 00:20:36.171 "digest": "sha384", 00:20:36.171 "dhgroup": "null" 00:20:36.171 } 00:20:36.171 } 00:20:36.171 ]' 00:20:36.171 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.430 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.430 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.430 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:36.430 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.430 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.430 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.430 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.688 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:36.688 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.256 12:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.256 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.256 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.515 00:20:37.515 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.515 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.515 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.773 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.773 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.773 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:37.773 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.773 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.773 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.773 { 00:20:37.773 "cntlid": 57, 00:20:37.773 "qid": 0, 00:20:37.773 "state": "enabled", 00:20:37.773 "thread": "nvmf_tgt_poll_group_000", 00:20:37.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.773 "listen_address": { 00:20:37.773 "trtype": "TCP", 00:20:37.773 "adrfam": "IPv4", 00:20:37.773 "traddr": "10.0.0.2", 00:20:37.773 "trsvcid": "4420" 00:20:37.773 }, 00:20:37.773 "peer_address": { 00:20:37.773 "trtype": "TCP", 00:20:37.773 "adrfam": "IPv4", 00:20:37.773 "traddr": "10.0.0.1", 00:20:37.773 "trsvcid": "50786" 00:20:37.773 }, 00:20:37.773 "auth": { 00:20:37.773 "state": "completed", 00:20:37.773 "digest": "sha384", 00:20:37.773 "dhgroup": "ffdhe2048" 00:20:37.773 } 00:20:37.774 } 00:20:37.774 ]' 00:20:37.774 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.774 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.774 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.774 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.774 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.032 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.032 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.032 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.032 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:38.032 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.600 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.859 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.117 00:20:39.117 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.117 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.117 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.376 { 00:20:39.376 "cntlid": 59, 00:20:39.376 "qid": 0, 00:20:39.376 "state": "enabled", 00:20:39.376 "thread": "nvmf_tgt_poll_group_000", 00:20:39.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.376 "listen_address": { 00:20:39.376 "trtype": "TCP", 00:20:39.376 "adrfam": "IPv4", 00:20:39.376 "traddr": "10.0.0.2", 00:20:39.376 "trsvcid": "4420" 00:20:39.376 }, 00:20:39.376 "peer_address": { 00:20:39.376 "trtype": "TCP", 00:20:39.376 "adrfam": "IPv4", 00:20:39.376 "traddr": "10.0.0.1", 00:20:39.376 "trsvcid": "50816" 00:20:39.376 }, 00:20:39.376 "auth": { 00:20:39.376 "state": "completed", 00:20:39.376 "digest": "sha384", 00:20:39.376 "dhgroup": "ffdhe2048" 00:20:39.376 } 00:20:39.376 } 00:20:39.376 ]' 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.376 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.377 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.635 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:39.635 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:40.202 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.202 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.202 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.202 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.202 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.203 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.203 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.203 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.461 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.720 00:20:40.720 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.720 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.720 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.979 { 00:20:40.979 "cntlid": 61, 00:20:40.979 "qid": 0, 00:20:40.979 "state": "enabled", 00:20:40.979 "thread": "nvmf_tgt_poll_group_000", 00:20:40.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.979 "listen_address": { 00:20:40.979 "trtype": "TCP", 00:20:40.979 "adrfam": "IPv4", 00:20:40.979 "traddr": "10.0.0.2", 00:20:40.979 "trsvcid": "4420" 00:20:40.979 }, 00:20:40.979 "peer_address": { 00:20:40.979 "trtype": "TCP", 00:20:40.979 "adrfam": "IPv4", 00:20:40.979 "traddr": "10.0.0.1", 00:20:40.979 "trsvcid": "38328" 00:20:40.979 }, 00:20:40.979 "auth": { 00:20:40.979 "state": "completed", 00:20:40.979 "digest": "sha384", 00:20:40.979 "dhgroup": "ffdhe2048" 00:20:40.979 } 00:20:40.979 } 00:20:40.979 ]' 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.979 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.238 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.238 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.238 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.238 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.238 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.238 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:41.238 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:41.805 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.064 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.323 00:20:42.323 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.323 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.323 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.582 { 00:20:42.582 "cntlid": 63, 00:20:42.582 "qid": 0, 00:20:42.582 "state": "enabled", 00:20:42.582 "thread": "nvmf_tgt_poll_group_000", 00:20:42.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.582 "listen_address": { 00:20:42.582 "trtype": "TCP", 00:20:42.582 "adrfam": "IPv4", 00:20:42.582 "traddr": "10.0.0.2", 00:20:42.582 "trsvcid": "4420" 00:20:42.582 }, 00:20:42.582 "peer_address": { 00:20:42.582 "trtype": "TCP", 00:20:42.582 "adrfam": "IPv4", 00:20:42.582 "traddr": "10.0.0.1", 00:20:42.582 "trsvcid": "38356" 00:20:42.582 }, 00:20:42.582 "auth": { 00:20:42.582 "state": "completed", 00:20:42.582 "digest": "sha384", 00:20:42.582 "dhgroup": "ffdhe2048" 00:20:42.582 } 00:20:42.582 } 00:20:42.582 ]' 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.582 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.841 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:42.841 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:43.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.409 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.668 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.926 
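Every pass of the loop traced in this section runs the same three-step RPC choreography. A condensed sketch, using the socket path, NQNs, and flags exactly as they appear in this run (a readable summary of what target/auth.sh drives, not a substitute for it; key0/ckey0 name keyring keys registered earlier in the script, outside this excerpt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1) Host side: restrict the initiator to the digest/dhgroup under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # 2) Target side: allow the host on the subsystem with the keypair under test.
    rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3) Host side: attach a controller through the authenticated qpair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0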
00:20:43.926 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.926 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.926 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.184 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.185 { 00:20:44.185 "cntlid": 65, 00:20:44.185 "qid": 0, 00:20:44.185 "state": "enabled", 00:20:44.185 "thread": "nvmf_tgt_poll_group_000", 00:20:44.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.185 "listen_address": { 00:20:44.185 "trtype": "TCP", 00:20:44.185 "adrfam": "IPv4", 00:20:44.185 "traddr": "10.0.0.2", 00:20:44.185 "trsvcid": "4420" 00:20:44.185 }, 00:20:44.185 "peer_address": { 00:20:44.185 "trtype": "TCP", 00:20:44.185 "adrfam": "IPv4", 00:20:44.185 "traddr": "10.0.0.1", 00:20:44.185 "trsvcid": "38386" 00:20:44.185 }, 00:20:44.185 "auth": { 00:20:44.185 "state": "completed", 00:20:44.185 "digest": "sha384", 00:20:44.185 "dhgroup": "ffdhe3072" 00:20:44.185 } 00:20:44.185 } 00:20:44.185 ]' 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.185 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.443 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:44.443 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.010 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.269 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.528 00:20:45.528 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.528 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.528 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.786 { 00:20:45.786 "cntlid": 67, 00:20:45.786 "qid": 0, 00:20:45.786 "state": "enabled", 00:20:45.786 "thread": "nvmf_tgt_poll_group_000", 00:20:45.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.786 "listen_address": { 00:20:45.786 "trtype": "TCP", 00:20:45.786 "adrfam": "IPv4", 00:20:45.786 "traddr": "10.0.0.2", 00:20:45.786 "trsvcid": "4420" 00:20:45.786 }, 00:20:45.786 "peer_address": { 00:20:45.786 "trtype": "TCP", 00:20:45.786 "adrfam": "IPv4", 00:20:45.786 "traddr": "10.0.0.1", 00:20:45.786 "trsvcid": "38420" 00:20:45.786 }, 00:20:45.786 "auth": { 00:20:45.786 "state": "completed", 00:20:45.786 "digest": "sha384", 00:20:45.786 "dhgroup": "ffdhe3072" 00:20:45.786 } 00:20:45.786 } 00:20:45.786 ]' 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.786 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.120 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret 
DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:46.120 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.711 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.970 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.229 00:20:47.229 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.229 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.229 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.229 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.229 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.229 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.229 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.229 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.229 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.229 { 00:20:47.229 "cntlid": 69, 00:20:47.229 "qid": 0, 00:20:47.229 "state": "enabled", 00:20:47.229 "thread": "nvmf_tgt_poll_group_000", 00:20:47.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.229 "listen_address": { 00:20:47.229 "trtype": "TCP", 00:20:47.229 "adrfam": "IPv4", 00:20:47.229 "traddr": "10.0.0.2", 00:20:47.229 "trsvcid": "4420" 00:20:47.229 }, 00:20:47.229 "peer_address": { 00:20:47.229 "trtype": "TCP", 00:20:47.229 "adrfam": "IPv4", 00:20:47.229 "traddr": "10.0.0.1", 00:20:47.229 "trsvcid": "38456" 00:20:47.229 }, 00:20:47.229 "auth": { 00:20:47.229 "state": "completed", 00:20:47.229 "digest": "sha384", 00:20:47.229 "dhgroup": "ffdhe3072" 00:20:47.229 } 00:20:47.229 } 00:20:47.229 ]' 00:20:47.229 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.488 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.488 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.488 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.488 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.488 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.488 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.489 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:47.747 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:47.747 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.314 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
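The key3 passes above are the one variant: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) assignment at target/auth.sh@68 expands to nothing when the controller key is unset, so nvmf_subsystem_add_host and bdev_connect run with --dhchap-key key3 alone. A minimal stand-alone illustration of that expansion, with hypothetical key material:

    # ${var:+word}: expands to "word" only when var is set and non-empty --
    # the idiom that makes the controller key optional per key id.
    ckeys=([0]="hypothetical-ckey0" [3]="")   # key3 deliberately has no ctrlr key
    for id in 0 3; do
        ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
        echo "key$id:" --dhchap-key "key$id" "${ckey[@]}"
    done
    # key0: --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # key3: --dhchap-key key3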
00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.314 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.572 00:20:48.572 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.572 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.572 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.830 { 00:20:48.830 "cntlid": 71, 00:20:48.830 "qid": 0, 00:20:48.830 "state": "enabled", 00:20:48.830 "thread": "nvmf_tgt_poll_group_000", 00:20:48.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.830 "listen_address": { 00:20:48.830 "trtype": "TCP", 00:20:48.830 "adrfam": "IPv4", 00:20:48.830 "traddr": "10.0.0.2", 00:20:48.830 "trsvcid": "4420" 00:20:48.830 }, 00:20:48.830 "peer_address": { 00:20:48.830 "trtype": "TCP", 00:20:48.830 "adrfam": "IPv4", 00:20:48.830 "traddr": "10.0.0.1", 00:20:48.830 "trsvcid": "38478" 00:20:48.830 }, 00:20:48.830 "auth": { 00:20:48.830 "state": "completed", 00:20:48.830 "digest": "sha384", 00:20:48.830 "dhgroup": "ffdhe3072" 00:20:48.830 } 00:20:48.830 } 00:20:48.830 ]' 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.830 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.089 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.089 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.089 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.089 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.089 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.089 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:49.089 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:49.656 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.656 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.656 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.656 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
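After each attach, the qpair is read back from the target and the negotiated parameters are asserted (the target/auth.sh@73-@77 steps repeated throughout this section). Those jq probes reduce to the sketch below, with the dhgroup of the round now in progress:

    # Read the active qpair back and assert what was negotiated, as the
    # trace's jq probes do; ffdhe4096 is the dhgroup of this round.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]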
00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.915 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.175 00:20:50.175 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.175 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.175 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.434 { 00:20:50.434 "cntlid": 73, 00:20:50.434 "qid": 0, 00:20:50.434 "state": "enabled", 00:20:50.434 "thread": "nvmf_tgt_poll_group_000", 00:20:50.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.434 "listen_address": { 00:20:50.434 "trtype": "TCP", 00:20:50.434 "adrfam": "IPv4", 00:20:50.434 "traddr": "10.0.0.2", 00:20:50.434 "trsvcid": "4420" 00:20:50.434 }, 00:20:50.434 "peer_address": { 00:20:50.434 "trtype": "TCP", 00:20:50.434 "adrfam": "IPv4", 00:20:50.434 "traddr": "10.0.0.1", 00:20:50.434 "trsvcid": "45074" 00:20:50.434 }, 00:20:50.434 "auth": { 00:20:50.434 "state": "completed", 00:20:50.434 "digest": "sha384", 00:20:50.434 "dhgroup": "ffdhe4096" 00:20:50.434 } 00:20:50.434 } 00:20:50.434 ]' 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.434 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.692 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.692 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.692 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.692 
12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.692 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.692 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:50.692 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:51.259 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
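The ckey assignment traced above is the one piece of bash in this loop worth unpacking: ${ckeys[$3]:+word} expands to word only when ckeys[$3] is set and non-empty, so the ckey array either holds the two controller-key arguments or stays empty, and the flags drop out of the add_host call entirely for keys without a controller secret (key3 later in this log is added with --dhchap-key only). A sketch of the idiom with stand-in names, where $keyid plays the role of the script's $3:

# ckey is an array: either (--dhchap-ctrlr-key ckeyN) or () when ckeys[N] is unset/empty.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"

With "${ckey[@]}", an empty array contributes no arguments at all, which plain string interpolation would not guarantee.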
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.518 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.777
00:20:51.777 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:51.777 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:51.777 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:52.035 {
00:20:52.035 "cntlid": 75,
00:20:52.035 "qid": 0,
00:20:52.035 "state": "enabled",
00:20:52.035 "thread": "nvmf_tgt_poll_group_000",
00:20:52.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:52.035 "listen_address": {
00:20:52.035 "trtype": "TCP",
00:20:52.035 "adrfam": "IPv4",
00:20:52.035 "traddr": "10.0.0.2",
00:20:52.035 "trsvcid": "4420"
00:20:52.035 },
00:20:52.035 "peer_address": {
00:20:52.035 "trtype": "TCP",
00:20:52.035 "adrfam": "IPv4",
00:20:52.035 "traddr": "10.0.0.1",
00:20:52.035 "trsvcid": "45098"
00:20:52.035 },
00:20:52.035 "auth": {
00:20:52.035 "state": "completed",
00:20:52.035 "digest": "sha384",
00:20:52.035 "dhgroup": "ffdhe4096"
00:20:52.035 }
00:20:52.035 }
00:20:52.035 ]'
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:52.035 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:52.294 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:52.294 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:52.294 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
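Each iteration first narrows the host's negotiation offer with bdev_nvme_set_options, which is what makes the [[ ... == ffdhe4096 ]] assertions meaningful: the handshake cannot settle on anything but the one digest/DH-group pair under test. A sketch of the host-side half of one iteration, where $rpc stands for the rpc.py path used throughout this trace and the key names refer to keys registered with the host app earlier in the test, outside this excerpt:

# Pin the host to a single digest and DH group, then attach using named keys.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1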
00:20:52.294 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:52.294 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:52.294 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==:
00:20:52.294 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==:
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:52.861 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:53.120 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:53.379
00:20:53.379 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:53.379 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:53.379 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:53.638 {
00:20:53.638 "cntlid": 77,
00:20:53.638 "qid": 0,
00:20:53.638 "state": "enabled",
00:20:53.638 "thread": "nvmf_tgt_poll_group_000",
00:20:53.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:53.638 "listen_address": {
00:20:53.638 "trtype": "TCP",
00:20:53.638 "adrfam": "IPv4",
00:20:53.638 "traddr": "10.0.0.2",
00:20:53.638 "trsvcid": "4420"
00:20:53.638 },
00:20:53.638 "peer_address": {
00:20:53.638 "trtype": "TCP",
00:20:53.638 "adrfam": "IPv4",
00:20:53.638 "traddr": "10.0.0.1",
00:20:53.638 "trsvcid": "45134"
00:20:53.638 },
00:20:53.638 "auth": {
00:20:53.638 "state": "completed",
00:20:53.638 "digest": "sha384",
00:20:53.638 "dhgroup": "ffdhe4096"
00:20:53.638 }
00:20:53.638 }
00:20:53.638 ]'
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:53.638 12:24:00
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.638 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.897 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.897 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.897 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.897 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:53.897 12:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.464 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.722 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.981 00:20:54.981 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.981 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.981 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.240 { 00:20:55.240 "cntlid": 79, 00:20:55.240 "qid": 0, 00:20:55.240 "state": "enabled", 00:20:55.240 "thread": "nvmf_tgt_poll_group_000", 00:20:55.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.240 "listen_address": { 00:20:55.240 "trtype": "TCP", 00:20:55.240 "adrfam": "IPv4", 00:20:55.240 "traddr": "10.0.0.2", 00:20:55.240 "trsvcid": "4420" 00:20:55.240 }, 00:20:55.240 "peer_address": { 00:20:55.240 "trtype": "TCP", 00:20:55.240 "adrfam": "IPv4", 00:20:55.240 "traddr": "10.0.0.1", 00:20:55.240 "trsvcid": "45152" 00:20:55.240 }, 00:20:55.240 "auth": { 00:20:55.240 "state": "completed", 00:20:55.240 "digest": "sha384", 00:20:55.240 "dhgroup": "ffdhe4096" 00:20:55.240 } 00:20:55.240 } 00:20:55.240 ]' 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.240 12:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.240 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.240 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.240 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.240 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.240 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.240 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.499 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:55.499 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.066 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:56.324 12:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.324 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.582 00:20:56.582 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.582 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.582 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.841 { 00:20:56.841 "cntlid": 81, 00:20:56.841 "qid": 0, 00:20:56.841 "state": "enabled", 00:20:56.841 "thread": "nvmf_tgt_poll_group_000", 00:20:56.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.841 "listen_address": { 00:20:56.841 "trtype": "TCP", 00:20:56.841 "adrfam": "IPv4", 00:20:56.841 "traddr": "10.0.0.2", 00:20:56.841 "trsvcid": "4420" 00:20:56.841 }, 00:20:56.841 "peer_address": { 00:20:56.841 "trtype": "TCP", 00:20:56.841 "adrfam": "IPv4", 00:20:56.841 "traddr": "10.0.0.1", 00:20:56.841 "trsvcid": "45188" 00:20:56.841 }, 00:20:56.841 "auth": { 00:20:56.841 "state": "completed", 00:20:56.841 "digest": 
"sha384", 00:20:56.841 "dhgroup": "ffdhe6144" 00:20:56.841 } 00:20:56.841 } 00:20:56.841 ]' 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.841 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.100 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.100 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.100 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.100 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.100 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.358 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:57.358 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:20:57.924 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.925 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.491 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.491 { 00:20:58.491 "cntlid": 83, 00:20:58.491 "qid": 0, 00:20:58.491 "state": "enabled", 00:20:58.491 "thread": "nvmf_tgt_poll_group_000", 00:20:58.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.491 "listen_address": { 00:20:58.491 "trtype": "TCP", 00:20:58.491 "adrfam": "IPv4", 00:20:58.491 "traddr": "10.0.0.2", 00:20:58.491 
"trsvcid": "4420" 00:20:58.491 }, 00:20:58.491 "peer_address": { 00:20:58.491 "trtype": "TCP", 00:20:58.491 "adrfam": "IPv4", 00:20:58.491 "traddr": "10.0.0.1", 00:20:58.491 "trsvcid": "45206" 00:20:58.491 }, 00:20:58.491 "auth": { 00:20:58.491 "state": "completed", 00:20:58.491 "digest": "sha384", 00:20:58.491 "dhgroup": "ffdhe6144" 00:20:58.491 } 00:20:58.491 } 00:20:58.491 ]' 00:20:58.491 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.750 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.750 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.750 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.750 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.750 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.750 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.750 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.009 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:59.009 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.576 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.834 
12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:59.834 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.834 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.835 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.093 00:21:00.093 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.093 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.093 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.352 { 00:21:00.352 "cntlid": 85, 00:21:00.352 "qid": 0, 00:21:00.352 "state": "enabled", 00:21:00.352 "thread": "nvmf_tgt_poll_group_000", 00:21:00.352 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.352 "listen_address": { 00:21:00.352 "trtype": "TCP", 00:21:00.352 "adrfam": "IPv4", 00:21:00.352 "traddr": "10.0.0.2", 00:21:00.352 "trsvcid": "4420" 00:21:00.352 }, 00:21:00.352 "peer_address": { 00:21:00.352 "trtype": "TCP", 00:21:00.352 "adrfam": "IPv4", 00:21:00.352 "traddr": "10.0.0.1", 00:21:00.352 "trsvcid": "45230" 00:21:00.352 }, 00:21:00.352 "auth": { 00:21:00.352 "state": "completed", 00:21:00.352 "digest": "sha384", 00:21:00.352 "dhgroup": "ffdhe6144" 00:21:00.352 } 00:21:00.352 } 00:21:00.352 ]' 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.352 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.611 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:00.611 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:01.179 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.179 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.179 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.179 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.179 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.179 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.179 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.179 12:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.438 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.697 00:21:01.697 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.697 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.697 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.956 { 00:21:01.956 "cntlid": 87, 
00:21:01.956 "qid": 0, 00:21:01.956 "state": "enabled", 00:21:01.956 "thread": "nvmf_tgt_poll_group_000", 00:21:01.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.956 "listen_address": { 00:21:01.956 "trtype": "TCP", 00:21:01.956 "adrfam": "IPv4", 00:21:01.956 "traddr": "10.0.0.2", 00:21:01.956 "trsvcid": "4420" 00:21:01.956 }, 00:21:01.956 "peer_address": { 00:21:01.956 "trtype": "TCP", 00:21:01.956 "adrfam": "IPv4", 00:21:01.956 "traddr": "10.0.0.1", 00:21:01.956 "trsvcid": "34538" 00:21:01.956 }, 00:21:01.956 "auth": { 00:21:01.956 "state": "completed", 00:21:01.956 "digest": "sha384", 00:21:01.956 "dhgroup": "ffdhe6144" 00:21:01.956 } 00:21:01.956 } 00:21:01.956 ]' 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.956 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.215 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:02.215 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.783 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.042 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.609 00:21:03.609 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.609 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.610 { 00:21:03.610 "cntlid": 89, 00:21:03.610 "qid": 0, 00:21:03.610 "state": "enabled", 00:21:03.610 "thread": "nvmf_tgt_poll_group_000", 00:21:03.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.610 "listen_address": { 00:21:03.610 "trtype": "TCP", 00:21:03.610 "adrfam": "IPv4", 00:21:03.610 "traddr": "10.0.0.2", 00:21:03.610 "trsvcid": "4420" 00:21:03.610 }, 00:21:03.610 "peer_address": { 00:21:03.610 "trtype": "TCP", 00:21:03.610 "adrfam": "IPv4", 00:21:03.610 "traddr": "10.0.0.1", 00:21:03.610 "trsvcid": "34564" 00:21:03.610 }, 00:21:03.610 "auth": { 00:21:03.610 "state": "completed", 00:21:03.610 "digest": "sha384", 00:21:03.610 "dhgroup": "ffdhe8192" 00:21:03.610 } 00:21:03.610 } 00:21:03.610 ]' 00:21:03.610 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.868 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.868 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.869 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.869 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.869 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.869 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.869 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.127 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:04.127 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.695 12:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.695 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.953 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.953 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.953 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.953 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.953 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.953 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.212 00:21:05.212 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.212 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.212 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.471 { 00:21:05.471 "cntlid": 91, 00:21:05.471 "qid": 0, 00:21:05.471 "state": "enabled", 00:21:05.471 "thread": "nvmf_tgt_poll_group_000", 00:21:05.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.471 "listen_address": { 00:21:05.471 "trtype": "TCP", 00:21:05.471 "adrfam": "IPv4", 00:21:05.471 "traddr": "10.0.0.2", 00:21:05.471 "trsvcid": "4420" 00:21:05.471 }, 00:21:05.471 "peer_address": { 00:21:05.471 "trtype": "TCP", 00:21:05.471 "adrfam": "IPv4", 00:21:05.471 "traddr": "10.0.0.1", 00:21:05.471 "trsvcid": "34596" 00:21:05.471 }, 00:21:05.471 "auth": { 00:21:05.471 "state": "completed", 00:21:05.471 "digest": "sha384", 00:21:05.471 "dhgroup": "ffdhe8192" 00:21:05.471 } 00:21:05.471 } 00:21:05.471 ]' 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.471 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.730 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.730 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.730 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.730 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.730 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:05.730 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:06.298 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.298 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.298 12:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.298 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.298 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.298 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.298 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.557 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.124 00:21:07.124 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.124 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.124 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.383 12:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.383 { 00:21:07.383 "cntlid": 93, 00:21:07.383 "qid": 0, 00:21:07.383 "state": "enabled", 00:21:07.383 "thread": "nvmf_tgt_poll_group_000", 00:21:07.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.383 "listen_address": { 00:21:07.383 "trtype": "TCP", 00:21:07.383 "adrfam": "IPv4", 00:21:07.383 "traddr": "10.0.0.2", 00:21:07.383 "trsvcid": "4420" 00:21:07.383 }, 00:21:07.383 "peer_address": { 00:21:07.383 "trtype": "TCP", 00:21:07.383 "adrfam": "IPv4", 00:21:07.383 "traddr": "10.0.0.1", 00:21:07.383 "trsvcid": "34620" 00:21:07.383 }, 00:21:07.383 "auth": { 00:21:07.383 "state": "completed", 00:21:07.383 "digest": "sha384", 00:21:07.383 "dhgroup": "ffdhe8192" 00:21:07.383 } 00:21:07.383 } 00:21:07.383 ]' 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.383 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.642 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:07.642 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:08.210 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.210 12:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.210 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.210 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.210 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.210 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.210 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.210 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.468 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.036 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.036 
12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.036 { 00:21:09.036 "cntlid": 95, 00:21:09.036 "qid": 0, 00:21:09.036 "state": "enabled", 00:21:09.036 "thread": "nvmf_tgt_poll_group_000", 00:21:09.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.036 "listen_address": { 00:21:09.036 "trtype": "TCP", 00:21:09.036 "adrfam": "IPv4", 00:21:09.036 "traddr": "10.0.0.2", 00:21:09.036 "trsvcid": "4420" 00:21:09.036 }, 00:21:09.036 "peer_address": { 00:21:09.036 "trtype": "TCP", 00:21:09.036 "adrfam": "IPv4", 00:21:09.036 "traddr": "10.0.0.1", 00:21:09.036 "trsvcid": "34638" 00:21:09.036 }, 00:21:09.036 "auth": { 00:21:09.036 "state": "completed", 00:21:09.036 "digest": "sha384", 00:21:09.036 "dhgroup": "ffdhe8192" 00:21:09.036 } 00:21:09.036 } 00:21:09.036 ]' 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.036 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.295 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.295 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.295 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.295 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.295 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.553 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:09.553 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.121 12:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.121 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.380 00:21:10.380 
12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.380 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.380 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.639 { 00:21:10.639 "cntlid": 97, 00:21:10.639 "qid": 0, 00:21:10.639 "state": "enabled", 00:21:10.639 "thread": "nvmf_tgt_poll_group_000", 00:21:10.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.639 "listen_address": { 00:21:10.639 "trtype": "TCP", 00:21:10.639 "adrfam": "IPv4", 00:21:10.639 "traddr": "10.0.0.2", 00:21:10.639 "trsvcid": "4420" 00:21:10.639 }, 00:21:10.639 "peer_address": { 00:21:10.639 "trtype": "TCP", 00:21:10.639 "adrfam": "IPv4", 00:21:10.639 "traddr": "10.0.0.1", 00:21:10.639 "trsvcid": "41798" 00:21:10.639 }, 00:21:10.639 "auth": { 00:21:10.639 "state": "completed", 00:21:10.639 "digest": "sha512", 00:21:10.639 "dhgroup": "null" 00:21:10.639 } 00:21:10.639 } 00:21:10.639 ]' 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.639 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.898 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.898 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.898 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.898 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:10.898 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.465 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.724 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.725 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.725 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.725 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.984 00:21:11.984 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.984 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.984 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.242 { 00:21:12.242 "cntlid": 99, 00:21:12.242 "qid": 0, 00:21:12.242 "state": "enabled", 00:21:12.242 "thread": "nvmf_tgt_poll_group_000", 00:21:12.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:12.242 "listen_address": { 00:21:12.242 "trtype": "TCP", 00:21:12.242 "adrfam": "IPv4", 00:21:12.242 "traddr": "10.0.0.2", 00:21:12.242 "trsvcid": "4420" 00:21:12.242 }, 00:21:12.242 "peer_address": { 00:21:12.242 "trtype": "TCP", 00:21:12.242 "adrfam": "IPv4", 00:21:12.242 "traddr": "10.0.0.1", 00:21:12.242 "trsvcid": "41836" 00:21:12.242 }, 00:21:12.242 "auth": { 00:21:12.242 "state": "completed", 00:21:12.242 "digest": "sha512", 00:21:12.242 "dhgroup": "null" 00:21:12.242 } 00:21:12.242 } 00:21:12.242 ]' 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.242 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.242 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:12.242 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.501 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.501 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.501 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.501 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:12.501 12:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.068 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
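[editor's note] Each round in this trace repeats the same host-side sequence, as in the sha512/null round opening above: bdev_nvme_set_options pins the initiator to a single digest/dhgroup combination, then bdev_nvme_attach_controller presents a host key and a controller key for bidirectional DH-HMAC-CHAP. A minimal standalone sketch of that pair of RPCs follows; the rpc.py path, socket, and key names are placeholders (the keys are assumed to already be registered with the target's keyring, which happens earlier in the full test), while the flags themselves match the ones traced in this run:

  # Hedged sketch of the host-side attach sequence traced in this log.
  # RPC_PY, SOCK, HOSTNQN and the key names are assumptions, not values
  # taken from this run.
  RPC_PY=./scripts/rpc.py          # assumed checkout-relative path
  SOCK=/var/tmp/host.sock          # host-side bdev_nvme RPC socket

  # Restrict the host to one digest/dhgroup combination for this round
  "$RPC_PY" -s "$SOCK" bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null

  # Attach with a host key and a controller (bidirectional) key
  "$RPC_PY" -s "$SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

[end editor's note]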
00:21:13.327 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.586 00:21:13.586 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.586 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.586 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.844 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.844 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.844 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.844 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.844 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.844 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.844 { 00:21:13.844 "cntlid": 101, 00:21:13.844 "qid": 0, 00:21:13.844 "state": "enabled", 00:21:13.844 "thread": "nvmf_tgt_poll_group_000", 00:21:13.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.844 "listen_address": { 00:21:13.844 "trtype": "TCP", 00:21:13.845 "adrfam": "IPv4", 00:21:13.845 "traddr": "10.0.0.2", 00:21:13.845 "trsvcid": "4420" 00:21:13.845 }, 00:21:13.845 "peer_address": { 00:21:13.845 "trtype": "TCP", 00:21:13.845 "adrfam": "IPv4", 00:21:13.845 "traddr": "10.0.0.1", 00:21:13.845 "trsvcid": "41872" 00:21:13.845 }, 00:21:13.845 "auth": { 00:21:13.845 "state": "completed", 00:21:13.845 "digest": "sha512", 00:21:13.845 "dhgroup": "null" 00:21:13.845 } 00:21:13.845 } 00:21:13.845 ]' 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.845 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.103 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:14.104 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.671 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.930 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.189 00:21:15.189 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.189 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.189 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.448 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.448 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.448 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.448 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.448 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.448 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.448 { 00:21:15.448 "cntlid": 103, 00:21:15.448 "qid": 0, 00:21:15.448 "state": "enabled", 00:21:15.448 "thread": "nvmf_tgt_poll_group_000", 00:21:15.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.448 "listen_address": { 00:21:15.448 "trtype": "TCP", 00:21:15.448 "adrfam": "IPv4", 00:21:15.448 "traddr": "10.0.0.2", 00:21:15.449 "trsvcid": "4420" 00:21:15.449 }, 00:21:15.449 "peer_address": { 00:21:15.449 "trtype": "TCP", 00:21:15.449 "adrfam": "IPv4", 00:21:15.449 "traddr": "10.0.0.1", 00:21:15.449 "trsvcid": "41900" 00:21:15.449 }, 00:21:15.449 "auth": { 00:21:15.449 "state": "completed", 00:21:15.449 "digest": "sha512", 00:21:15.449 "dhgroup": "null" 00:21:15.449 } 00:21:15.449 } 00:21:15.449 ]' 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.449 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.707 12:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:15.707 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:16.274 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.274 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.274 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.274 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.274 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.274 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.274 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.274 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.274 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
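[editor's note] After each attach, the test verifies on the target side that authentication actually completed with the expected parameters: nvmf_subsystem_get_qpairs returns the admin qpair as JSON, and the jq checks repeated throughout this trace assert the negotiated digest, dhgroup, and auth state. A hedged sketch of that verification step, where rpc_cmd stands in for the test's target-side RPC wrapper and the expected values are examples rather than output from this run:

  # Hedged sketch of the qpair verification step seen throughout this log.
  # rpc_cmd is the test's target-side rpc.py wrapper (an assumption here);
  # the expected digest/dhgroup values are examples.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

[end editor's note]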
00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.532 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.790 00:21:16.790 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.790 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.790 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.047 { 00:21:17.047 "cntlid": 105, 00:21:17.047 "qid": 0, 00:21:17.047 "state": "enabled", 00:21:17.047 "thread": "nvmf_tgt_poll_group_000", 00:21:17.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.047 "listen_address": { 00:21:17.047 "trtype": "TCP", 00:21:17.047 "adrfam": "IPv4", 00:21:17.047 "traddr": "10.0.0.2", 00:21:17.047 "trsvcid": "4420" 00:21:17.047 }, 00:21:17.047 "peer_address": { 00:21:17.047 "trtype": "TCP", 00:21:17.047 "adrfam": "IPv4", 00:21:17.047 "traddr": "10.0.0.1", 00:21:17.047 "trsvcid": "41930" 00:21:17.047 }, 00:21:17.047 "auth": { 00:21:17.047 "state": "completed", 00:21:17.047 "digest": "sha512", 00:21:17.047 "dhgroup": "ffdhe2048" 00:21:17.047 } 00:21:17.047 } 00:21:17.047 ]' 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.047 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.047 12:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.305 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:17.305 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.871 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.129 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.387 00:21:18.387 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.387 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.387 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.644 { 00:21:18.644 "cntlid": 107, 00:21:18.644 "qid": 0, 00:21:18.644 "state": "enabled", 00:21:18.644 "thread": "nvmf_tgt_poll_group_000", 00:21:18.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.644 "listen_address": { 00:21:18.644 "trtype": "TCP", 00:21:18.644 "adrfam": "IPv4", 00:21:18.644 "traddr": "10.0.0.2", 00:21:18.644 "trsvcid": "4420" 00:21:18.644 }, 00:21:18.644 "peer_address": { 00:21:18.644 "trtype": "TCP", 00:21:18.644 "adrfam": "IPv4", 00:21:18.644 "traddr": "10.0.0.1", 00:21:18.644 "trsvcid": "41952" 00:21:18.644 }, 00:21:18.644 "auth": { 00:21:18.644 "state": "completed", 00:21:18.644 "digest": "sha512", 00:21:18.644 "dhgroup": "ffdhe2048" 00:21:18.644 } 00:21:18.644 } 00:21:18.644 ]' 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.644 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.902 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:18.902 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.468 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
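The records above capture one full connect_authenticate pass for a single key: the host-side bdev_nvme_set_options RPC pins the DH-HMAC-CHAP digest and DH group, the target registers the host NQN with a key pair, and bdev_nvme_attach_controller performs the authenticated connect. A minimal standalone sketch of that sequence, assuming the same socket paths, NQNs, and keyring names (key2/ckey2) as the trace, and that those keys were already loaded into the host and target keyrings earlier in the test:

    #!/usr/bin/env bash
    # Sketch only: paths, NQNs, and key names are copied from the trace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Host side: restrict the initiator to one digest/DH-group combination.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side (default RPC socket): allow the host NQN with a key pair.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2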
00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.725 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.725 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.983 { 00:21:19.983 "cntlid": 109, 00:21:19.983 "qid": 0, 00:21:19.983 "state": "enabled", 00:21:19.983 "thread": "nvmf_tgt_poll_group_000", 00:21:19.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.983 "listen_address": { 00:21:19.983 "trtype": "TCP", 00:21:19.983 "adrfam": "IPv4", 00:21:19.983 "traddr": "10.0.0.2", 00:21:19.983 "trsvcid": "4420" 00:21:19.983 }, 00:21:19.983 "peer_address": { 00:21:19.983 "trtype": "TCP", 00:21:19.983 "adrfam": "IPv4", 00:21:19.983 "traddr": "10.0.0.1", 00:21:19.983 "trsvcid": "41966" 00:21:19.983 }, 00:21:19.983 "auth": { 00:21:19.983 "state": "completed", 00:21:19.983 "digest": "sha512", 00:21:19.983 "dhgroup": "ffdhe2048" 00:21:19.983 } 00:21:19.983 } 00:21:19.983 ]' 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.983 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.241 12:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.241 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.241 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.241 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.241 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.241 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:20.241 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.810 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.068 12:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.068 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.327 00:21:21.327 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.327 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.327 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.586 { 00:21:21.586 "cntlid": 111, 00:21:21.586 "qid": 0, 00:21:21.586 "state": "enabled", 00:21:21.586 "thread": "nvmf_tgt_poll_group_000", 00:21:21.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.586 "listen_address": { 00:21:21.586 "trtype": "TCP", 00:21:21.586 "adrfam": "IPv4", 00:21:21.586 "traddr": "10.0.0.2", 00:21:21.586 "trsvcid": "4420" 00:21:21.586 }, 00:21:21.586 "peer_address": { 00:21:21.586 "trtype": "TCP", 00:21:21.586 "adrfam": "IPv4", 00:21:21.586 "traddr": "10.0.0.1", 00:21:21.586 "trsvcid": "48318" 00:21:21.586 }, 00:21:21.586 "auth": { 00:21:21.586 "state": "completed", 00:21:21.586 "digest": "sha512", 00:21:21.586 "dhgroup": "ffdhe2048" 00:21:21.586 } 00:21:21.586 } 00:21:21.586 ]' 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.586 
12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.586 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.844 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:21.844 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.410 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.668 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.927 00:21:22.927 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.927 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.927 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.186 { 00:21:23.186 "cntlid": 113, 00:21:23.186 "qid": 0, 00:21:23.186 "state": "enabled", 00:21:23.186 "thread": "nvmf_tgt_poll_group_000", 00:21:23.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.186 "listen_address": { 00:21:23.186 "trtype": "TCP", 00:21:23.186 "adrfam": "IPv4", 00:21:23.186 "traddr": "10.0.0.2", 00:21:23.186 "trsvcid": "4420" 00:21:23.186 }, 00:21:23.186 "peer_address": { 00:21:23.186 "trtype": "TCP", 00:21:23.186 "adrfam": "IPv4", 00:21:23.186 "traddr": "10.0.0.1", 00:21:23.186 "trsvcid": "48336" 00:21:23.186 }, 00:21:23.186 "auth": { 00:21:23.186 "state": "completed", 00:21:23.186 "digest": "sha512", 00:21:23.186 "dhgroup": "ffdhe3072" 00:21:23.186 } 00:21:23.186 } 00:21:23.186 ]' 00:21:23.186 12:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.186 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.497 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:23.498 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.150 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.409 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.667 { 00:21:24.667 "cntlid": 115, 00:21:24.667 "qid": 0, 00:21:24.667 "state": "enabled", 00:21:24.667 "thread": "nvmf_tgt_poll_group_000", 00:21:24.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.667 "listen_address": { 00:21:24.667 "trtype": "TCP", 00:21:24.667 "adrfam": "IPv4", 00:21:24.667 "traddr": "10.0.0.2", 00:21:24.667 "trsvcid": "4420" 00:21:24.667 }, 00:21:24.667 "peer_address": { 00:21:24.667 "trtype": "TCP", 00:21:24.667 "adrfam": "IPv4", 
00:21:24.667 "traddr": "10.0.0.1", 00:21:24.667 "trsvcid": "48372" 00:21:24.667 }, 00:21:24.667 "auth": { 00:21:24.667 "state": "completed", 00:21:24.667 "digest": "sha512", 00:21:24.667 "dhgroup": "ffdhe3072" 00:21:24.667 } 00:21:24.667 } 00:21:24.667 ]' 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.667 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.926 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.926 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.926 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.926 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.926 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.184 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:25.184 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.749 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.007 00:21:26.007 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.007 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.007 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.265 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.265 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.265 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.265 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.265 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.265 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.265 { 00:21:26.265 "cntlid": 117, 00:21:26.265 "qid": 0, 00:21:26.265 "state": "enabled", 00:21:26.265 "thread": "nvmf_tgt_poll_group_000", 00:21:26.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.265 "listen_address": { 00:21:26.265 "trtype": "TCP", 
00:21:26.265 "adrfam": "IPv4", 00:21:26.265 "traddr": "10.0.0.2", 00:21:26.265 "trsvcid": "4420" 00:21:26.265 }, 00:21:26.265 "peer_address": { 00:21:26.265 "trtype": "TCP", 00:21:26.265 "adrfam": "IPv4", 00:21:26.265 "traddr": "10.0.0.1", 00:21:26.265 "trsvcid": "48404" 00:21:26.265 }, 00:21:26.265 "auth": { 00:21:26.265 "state": "completed", 00:21:26.265 "digest": "sha512", 00:21:26.265 "dhgroup": "ffdhe3072" 00:21:26.265 } 00:21:26.265 } 00:21:26.265 ]' 00:21:26.265 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.265 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.265 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.523 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:26.523 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.523 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.523 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.523 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.523 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:26.523 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:27.096 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.354 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.612 00:21:27.612 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.612 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.612 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.870 { 00:21:27.870 "cntlid": 119, 00:21:27.870 "qid": 0, 00:21:27.870 "state": "enabled", 00:21:27.870 "thread": "nvmf_tgt_poll_group_000", 00:21:27.870 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.870 "listen_address": { 00:21:27.870 "trtype": "TCP", 00:21:27.870 "adrfam": "IPv4", 00:21:27.870 "traddr": "10.0.0.2", 00:21:27.870 "trsvcid": "4420" 00:21:27.870 }, 00:21:27.870 "peer_address": { 00:21:27.870 "trtype": "TCP", 00:21:27.870 "adrfam": "IPv4", 00:21:27.870 "traddr": "10.0.0.1", 00:21:27.870 "trsvcid": "48422" 00:21:27.870 }, 00:21:27.870 "auth": { 00:21:27.870 "state": "completed", 00:21:27.870 "digest": "sha512", 00:21:27.870 "dhgroup": "ffdhe3072" 00:21:27.870 } 00:21:27.870 } 00:21:27.870 ]' 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.870 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.128 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.128 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.128 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.128 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:28.128 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.699 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.699 12:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.957 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.214 00:21:29.214 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.214 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.214 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.472 12:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.472 { 00:21:29.472 "cntlid": 121, 00:21:29.472 "qid": 0, 00:21:29.472 "state": "enabled", 00:21:29.472 "thread": "nvmf_tgt_poll_group_000", 00:21:29.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.472 "listen_address": { 00:21:29.472 "trtype": "TCP", 00:21:29.472 "adrfam": "IPv4", 00:21:29.472 "traddr": "10.0.0.2", 00:21:29.472 "trsvcid": "4420" 00:21:29.472 }, 00:21:29.472 "peer_address": { 00:21:29.472 "trtype": "TCP", 00:21:29.472 "adrfam": "IPv4", 00:21:29.472 "traddr": "10.0.0.1", 00:21:29.472 "trsvcid": "48450" 00:21:29.472 }, 00:21:29.472 "auth": { 00:21:29.472 "state": "completed", 00:21:29.472 "digest": "sha512", 00:21:29.472 "dhgroup": "ffdhe4096" 00:21:29.472 } 00:21:29.472 } 00:21:29.472 ]' 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.472 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.731 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:29.731 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
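After the bdev path succeeds, the same key is exercised through the kernel initiator: nvme connect carries the DHHC-1 secrets directly on the command line, and the host entry is removed from the subsystem once the controller disconnects, so the next iteration can re-add it. A sketch with placeholder secrets (the trace uses the test's own generated DHHC-1 keys, visible in the nvme_connect records above):

    # Placeholder secrets: substitute real DHHC-1 keys as generated by the test.
    KEY='DHHC-1:00:<host secret>:'
    CKEY='DHHC-1:03:<ctrl secret>:'
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562

    # Kernel initiator connect; flags match the nvme_connect records above.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
            --hostid "$HOSTID" -l 0 \
            --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

    nvme disconnect -n "$SUBNQN"

    # Drop the host from the subsystem before the next key is configured.
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"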
00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.297 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.556 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.815 00:21:30.815 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.815 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.815 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.073 { 00:21:31.073 "cntlid": 123, 00:21:31.073 "qid": 0, 00:21:31.073 "state": "enabled", 00:21:31.073 "thread": "nvmf_tgt_poll_group_000", 00:21:31.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.073 "listen_address": { 00:21:31.073 "trtype": "TCP", 00:21:31.073 "adrfam": "IPv4", 00:21:31.073 "traddr": "10.0.0.2", 00:21:31.073 "trsvcid": "4420" 00:21:31.073 }, 00:21:31.073 "peer_address": { 00:21:31.073 "trtype": "TCP", 00:21:31.073 "adrfam": "IPv4", 00:21:31.073 "traddr": "10.0.0.1", 00:21:31.073 "trsvcid": "50818" 00:21:31.073 }, 00:21:31.073 "auth": { 00:21:31.073 "state": "completed", 00:21:31.073 "digest": "sha512", 00:21:31.073 "dhgroup": "ffdhe4096" 00:21:31.073 } 00:21:31.073 } 00:21:31.073 ]' 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.073 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.332 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:31.332 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:31.898 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.898 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.898 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.898 12:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.898 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.898 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.898 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.898 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.157 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.415 00:21:32.415 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.415 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.415 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.674 12:24:39 
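[annotation] Before each host-side attach, the target has to be told which DH-HMAC-CHAP keys this host NQN may authenticate with; that is the nvmf_subsystem_add_host call traced above. The names key2/ckey2 are key identifiers presumably registered with the target earlier in the test, outside this excerpt. The grant, isolated as a sketch ($hostnqn standing in for the uuid-based host NQN in the log):

    # Allow this host to connect to cnode0, authenticating with key2;
    # --dhchap-ctrlr-key additionally makes the controller prove itself (bidirectional auth).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2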
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.674 { 00:21:32.674 "cntlid": 125, 00:21:32.674 "qid": 0, 00:21:32.674 "state": "enabled", 00:21:32.674 "thread": "nvmf_tgt_poll_group_000", 00:21:32.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.674 "listen_address": { 00:21:32.674 "trtype": "TCP", 00:21:32.674 "adrfam": "IPv4", 00:21:32.674 "traddr": "10.0.0.2", 00:21:32.674 "trsvcid": "4420" 00:21:32.674 }, 00:21:32.674 "peer_address": { 00:21:32.674 "trtype": "TCP", 00:21:32.674 "adrfam": "IPv4", 00:21:32.674 "traddr": "10.0.0.1", 00:21:32.674 "trsvcid": "50848" 00:21:32.674 }, 00:21:32.674 "auth": { 00:21:32.674 "state": "completed", 00:21:32.674 "digest": "sha512", 00:21:32.674 "dhgroup": "ffdhe4096" 00:21:32.674 } 00:21:32.674 } 00:21:32.674 ]' 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.674 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.932 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:32.932 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.499 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.757 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.758 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.758 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.758 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.758 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.016 00:21:34.016 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.016 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.016 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.274 12:24:40 
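[annotation] Note the asymmetry in the key3 round: both the nvmf_subsystem_add_host and bdev_nvme_attach_controller calls above carry --dhchap-key key3 but no controller key, so this round exercises unidirectional authentication. That is driven by the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line visible in the trace, a bash idiom worth isolating (generic index $keyid assumed):

    # ${ckeys[i]:+...} expands to the two words '--dhchap-ctrlr-key ckeyN' only when
    # ckeys[i] is set and non-empty; for key3 it is empty, so the flag vanishes entirely.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"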
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.274 { 00:21:34.274 "cntlid": 127, 00:21:34.274 "qid": 0, 00:21:34.274 "state": "enabled", 00:21:34.274 "thread": "nvmf_tgt_poll_group_000", 00:21:34.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.274 "listen_address": { 00:21:34.274 "trtype": "TCP", 00:21:34.274 "adrfam": "IPv4", 00:21:34.274 "traddr": "10.0.0.2", 00:21:34.274 "trsvcid": "4420" 00:21:34.274 }, 00:21:34.274 "peer_address": { 00:21:34.274 "trtype": "TCP", 00:21:34.274 "adrfam": "IPv4", 00:21:34.274 "traddr": "10.0.0.1", 00:21:34.274 "trsvcid": "50880" 00:21:34.274 }, 00:21:34.274 "auth": { 00:21:34.274 "state": "completed", 00:21:34.274 "digest": "sha512", 00:21:34.274 "dhgroup": "ffdhe4096" 00:21:34.274 } 00:21:34.274 } 00:21:34.274 ]' 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.274 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.274 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.274 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.274 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.274 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.274 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.532 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:34.532 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:35.098 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.099 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.357 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.615 00:21:35.615 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.615 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.615 
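[annotation] The target/auth.sh@119-@123 tags above expose the structure driving this whole phase: an outer loop over DH groups (the trace has just advanced from ffdhe4096 to ffdhe6144) and an inner loop over the key indices, with the host's allowed parameters narrowed before every round. Reconstructed from those tags as a sketch; $digest is assumed to come from an enclosing loop not visible here, and is sha512 throughout this excerpt:

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192 in this excerpt
        for keyid in "${!keys[@]}"; do       # 0 1 2 3
            # Pin the host to exactly one digest/dhgroup combination for this round
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done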
12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.874 { 00:21:35.874 "cntlid": 129, 00:21:35.874 "qid": 0, 00:21:35.874 "state": "enabled", 00:21:35.874 "thread": "nvmf_tgt_poll_group_000", 00:21:35.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.874 "listen_address": { 00:21:35.874 "trtype": "TCP", 00:21:35.874 "adrfam": "IPv4", 00:21:35.874 "traddr": "10.0.0.2", 00:21:35.874 "trsvcid": "4420" 00:21:35.874 }, 00:21:35.874 "peer_address": { 00:21:35.874 "trtype": "TCP", 00:21:35.874 "adrfam": "IPv4", 00:21:35.874 "traddr": "10.0.0.1", 00:21:35.874 "trsvcid": "50902" 00:21:35.874 }, 00:21:35.874 "auth": { 00:21:35.874 "state": "completed", 00:21:35.874 "digest": "sha512", 00:21:35.874 "dhgroup": "ffdhe6144" 00:21:35.874 } 00:21:35.874 } 00:21:35.874 ]' 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.874 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.132 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.132 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.132 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.132 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:36.132 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret 
DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.698 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.957 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.216 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.474 { 00:21:37.474 "cntlid": 131, 00:21:37.474 "qid": 0, 00:21:37.474 "state": "enabled", 00:21:37.474 "thread": "nvmf_tgt_poll_group_000", 00:21:37.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.474 "listen_address": { 00:21:37.474 "trtype": "TCP", 00:21:37.474 "adrfam": "IPv4", 00:21:37.474 "traddr": "10.0.0.2", 00:21:37.474 "trsvcid": "4420" 00:21:37.474 }, 00:21:37.474 "peer_address": { 00:21:37.474 "trtype": "TCP", 00:21:37.474 "adrfam": "IPv4", 00:21:37.474 "traddr": "10.0.0.1", 00:21:37.474 "trsvcid": "50936" 00:21:37.474 }, 00:21:37.474 "auth": { 00:21:37.474 "state": "completed", 00:21:37.474 "digest": "sha512", 00:21:37.474 "dhgroup": "ffdhe6144" 00:21:37.474 } 00:21:37.474 } 00:21:37.474 ]' 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.474 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.731 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.732 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.732 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.732 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.732 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.990 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:37.990 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.555 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.121 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.121 { 00:21:39.121 "cntlid": 133, 00:21:39.121 "qid": 0, 00:21:39.121 "state": "enabled", 00:21:39.121 "thread": "nvmf_tgt_poll_group_000", 00:21:39.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.121 "listen_address": { 00:21:39.121 "trtype": "TCP", 00:21:39.121 "adrfam": "IPv4", 00:21:39.121 "traddr": "10.0.0.2", 00:21:39.121 "trsvcid": "4420" 00:21:39.121 }, 00:21:39.121 "peer_address": { 00:21:39.121 "trtype": "TCP", 00:21:39.121 "adrfam": "IPv4", 00:21:39.121 "traddr": "10.0.0.1", 00:21:39.121 "trsvcid": "50962" 00:21:39.121 }, 00:21:39.121 "auth": { 00:21:39.121 "state": "completed", 00:21:39.121 "digest": "sha512", 00:21:39.121 "dhgroup": "ffdhe6144" 00:21:39.121 } 00:21:39.121 } 00:21:39.121 ]' 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.121 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.379 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.379 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.379 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.379 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.379 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.637 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret 
DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:39.637 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:40.200 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.201 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.201 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.201 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.201 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:40.201 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.766 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.766 { 00:21:40.766 "cntlid": 135, 00:21:40.766 "qid": 0, 00:21:40.766 "state": "enabled", 00:21:40.766 "thread": "nvmf_tgt_poll_group_000", 00:21:40.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.766 "listen_address": { 00:21:40.766 "trtype": "TCP", 00:21:40.766 "adrfam": "IPv4", 00:21:40.766 "traddr": "10.0.0.2", 00:21:40.766 "trsvcid": "4420" 00:21:40.766 }, 00:21:40.766 "peer_address": { 00:21:40.766 "trtype": "TCP", 00:21:40.766 "adrfam": "IPv4", 00:21:40.766 "traddr": "10.0.0.1", 00:21:40.766 "trsvcid": "34804" 00:21:40.766 }, 00:21:40.766 "auth": { 00:21:40.766 "state": "completed", 00:21:40.766 "digest": "sha512", 00:21:40.766 "dhgroup": "ffdhe6144" 00:21:40.766 } 00:21:40.766 } 00:21:40.766 ]' 00:21:40.766 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.024 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.024 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.024 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.024 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.024 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.024 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.024 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.282 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:41.282 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.848 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.415 00:21:42.415 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.415 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.415 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.673 { 00:21:42.673 "cntlid": 137, 00:21:42.673 "qid": 0, 00:21:42.673 "state": "enabled", 00:21:42.673 "thread": "nvmf_tgt_poll_group_000", 00:21:42.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:42.673 "listen_address": { 00:21:42.673 "trtype": "TCP", 00:21:42.673 "adrfam": "IPv4", 00:21:42.673 "traddr": "10.0.0.2", 00:21:42.673 "trsvcid": "4420" 00:21:42.673 }, 00:21:42.673 "peer_address": { 00:21:42.673 "trtype": "TCP", 00:21:42.673 "adrfam": "IPv4", 00:21:42.673 "traddr": "10.0.0.1", 00:21:42.673 "trsvcid": "34832" 00:21:42.673 }, 00:21:42.673 "auth": { 00:21:42.673 "state": "completed", 00:21:42.673 "digest": "sha512", 00:21:42.673 "dhgroup": "ffdhe8192" 00:21:42.673 } 00:21:42.673 } 00:21:42.673 ]' 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.673 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.931 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:42.931 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.497 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.755 12:24:50 
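[annotation] Two SPDK instances are in play throughout: plain rpc_cmd drives the NVMe-oF target, while every hostrpc line (tagged target/auth.sh@31) routes the same rpc.py client to a second, host-side application listening on /var/tmp/host.sock — that is where bdev_nvme_set_options, bdev_nvme_attach_controller, and bdev_nvme_detach_controller land. A plausible reconstruction of the wrapper, assuming only what the @31 expansions show:

    # target/auth.sh@31 (sketch): forward any RPC to the host-side SPDK app
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }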
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.755 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.323 00:21:44.323 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.323 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.323 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.323 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.323 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.323 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.323 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.323 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.323 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.323 { 00:21:44.323 "cntlid": 139, 00:21:44.323 "qid": 0, 00:21:44.323 "state": "enabled", 00:21:44.323 "thread": "nvmf_tgt_poll_group_000", 00:21:44.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.323 "listen_address": { 00:21:44.323 "trtype": "TCP", 00:21:44.323 "adrfam": "IPv4", 00:21:44.323 "traddr": "10.0.0.2", 00:21:44.324 "trsvcid": "4420" 00:21:44.324 }, 00:21:44.324 "peer_address": { 00:21:44.324 "trtype": "TCP", 00:21:44.324 "adrfam": "IPv4", 00:21:44.324 "traddr": "10.0.0.1", 00:21:44.324 "trsvcid": "34858" 00:21:44.324 }, 00:21:44.324 "auth": { 00:21:44.324 "state": "completed", 00:21:44.324 "digest": "sha512", 00:21:44.324 "dhgroup": "ffdhe8192" 00:21:44.324 } 00:21:44.324 } 00:21:44.324 ]' 00:21:44.324 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.583 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.583 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.583 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.583 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.583 12:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.583 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.583 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.843 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:44.843 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: --dhchap-ctrl-secret DHHC-1:02:OGVmYmNhMjM3OTU4YTJiZDJjMmIxOTIyZTk0OGM4YmY0MGRjZjE3ZjFkMGEzY2VhnQEAqQ==: 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.408 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.408 12:24:52 
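[annotation] On the kernel-initiator leg, nvme connect carries the key material inline instead of by key name. The DHHC-1:NN: prefix is the standard NVMe-oF secret representation: the two-digit field indicates how the secret was generated (00 for a raw secret; 01/02/03 for SHA-256/384/512-transformed ones, apparently why it tracks the key index in this log), followed by the base64 payload. The call shape, with secrets elided and $hostnqn/$hostid standing in for the uuid values above:

    # Attach via the kernel initiator, authenticating with an inline DH-HMAC-CHAP secret;
    # --dhchap-ctrl-secret additionally requests controller (bidirectional) authentication.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'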
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.408 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.974 00:21:45.974 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.974 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.974 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.231 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.231 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.231 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.231 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.231 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.231 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.231 { 00:21:46.231 "cntlid": 141, 00:21:46.231 "qid": 0, 00:21:46.231 "state": "enabled", 00:21:46.231 "thread": "nvmf_tgt_poll_group_000", 00:21:46.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.231 "listen_address": { 00:21:46.231 "trtype": "TCP", 00:21:46.231 "adrfam": "IPv4", 00:21:46.231 "traddr": "10.0.0.2", 00:21:46.231 "trsvcid": "4420" 00:21:46.231 }, 00:21:46.231 "peer_address": { 00:21:46.231 "trtype": "TCP", 00:21:46.231 "adrfam": "IPv4", 00:21:46.231 "traddr": "10.0.0.1", 00:21:46.231 "trsvcid": "34888" 00:21:46.232 }, 00:21:46.232 "auth": { 00:21:46.232 "state": "completed", 00:21:46.232 "digest": "sha512", 00:21:46.232 "dhgroup": "ffdhe8192" 00:21:46.232 } 00:21:46.232 } 00:21:46.232 ]' 00:21:46.232 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.232 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.232 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.232 12:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.232 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.232 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.232 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.232 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.489 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:46.489 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:01:NzlmNjQ4ZTc1NmM1MjgwMDUzMzljMzU1ZDA4NjlhYzUcjk0E: 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.055 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.312 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:47.312 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.312 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.312 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.312 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.312 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.312 12:24:54 
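A note on those secret strings: DH-HMAC-CHAP secrets use the NVMe DHHC-1:<t>:<base64>: representation, where, as I read the spec, <t> records how the secret was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the key material followed by a 4-byte CRC-32, which is why the DHHC-1:03: secrets in this log are the longest. A quick sanity check against one of this run's 01-type secrets:

    # fields of a DHHC-1 secret (value copied from the log above)
    secret='DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa:'
    IFS=: read -r tag xform b64 _ <<< "$secret"
    echo "$tag / $xform"                  # DHHC-1 / 01 (SHA-256-transformed)
    echo -n "$b64" | base64 -d | wc -c    # 36 bytes = 32-byte key + CRC-32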
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:47.312 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.312 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.312 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.312 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.312 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.312 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.877 00:21:47.877 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.877 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.877 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.877 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.877 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.877 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.877 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.135 { 00:21:48.135 "cntlid": 143, 00:21:48.135 "qid": 0, 00:21:48.135 "state": "enabled", 00:21:48.135 "thread": "nvmf_tgt_poll_group_000", 00:21:48.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.135 "listen_address": { 00:21:48.135 "trtype": "TCP", 00:21:48.135 "adrfam": "IPv4", 00:21:48.135 "traddr": "10.0.0.2", 00:21:48.135 "trsvcid": "4420" 00:21:48.135 }, 00:21:48.135 "peer_address": { 00:21:48.135 "trtype": "TCP", 00:21:48.135 "adrfam": "IPv4", 00:21:48.135 "traddr": "10.0.0.1", 00:21:48.135 "trsvcid": "34926" 00:21:48.135 }, 00:21:48.135 "auth": { 00:21:48.135 "state": "completed", 00:21:48.135 "digest": "sha512", 00:21:48.135 "dhgroup": "ffdhe8192" 00:21:48.135 } 00:21:48.135 } 00:21:48.135 ]' 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.135 
12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.135 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.393 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:48.393 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:48.958 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.215 12:24:55 
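Before the combined run, the host is reopened to every digest and DH group at once; the IFS=, and printf %s records in the trace are how auth.sh joins its digest and dhgroup arrays into the comma-separated lists that bdev_nvme_set_options expects. The same idiom in isolation:

    # join arrays with commas, as auth.sh does for the set_options arguments
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$(IFS=,; printf %s "${digests[*]}")" \
        --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"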
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.215 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.473 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.731 { 00:21:49.731 "cntlid": 145, 00:21:49.731 "qid": 0, 00:21:49.731 "state": "enabled", 00:21:49.731 "thread": "nvmf_tgt_poll_group_000", 00:21:49.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:49.731 "listen_address": { 00:21:49.731 "trtype": "TCP", 00:21:49.731 "adrfam": "IPv4", 00:21:49.731 "traddr": "10.0.0.2", 00:21:49.731 "trsvcid": "4420" 00:21:49.731 }, 00:21:49.731 "peer_address": { 00:21:49.731 
"trtype": "TCP", 00:21:49.731 "adrfam": "IPv4", 00:21:49.731 "traddr": "10.0.0.1", 00:21:49.731 "trsvcid": "34964" 00:21:49.731 }, 00:21:49.731 "auth": { 00:21:49.731 "state": "completed", 00:21:49.731 "digest": "sha512", 00:21:49.731 "dhgroup": "ffdhe8192" 00:21:49.731 } 00:21:49.731 } 00:21:49.731 ]' 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.731 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.989 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.989 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.989 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.989 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.989 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.246 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:50.246 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjkxNGIyZjRmOGFjNzVmOTI3NzUxNWY2MGYyYWZiMmUxNTE2Njc2NDc0NjY5MTBkw+ZbxA==: --dhchap-ctrl-secret DHHC-1:03:MDdlZjBmMzczODE3NmQwODkxNmIwMjMwMTNlOTA0MzBjZDQ1OWVhMDBmNmVmYjRmYWJiOWZmZDdiM2EyM2U1Yy0Ef/Y=: 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:50.812 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:51.070 request: 00:21:51.070 { 00:21:51.070 "name": "nvme0", 00:21:51.070 "trtype": "tcp", 00:21:51.070 "traddr": "10.0.0.2", 00:21:51.070 "adrfam": "ipv4", 00:21:51.070 "trsvcid": "4420", 00:21:51.070 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:51.070 "prchk_reftag": false, 00:21:51.071 "prchk_guard": false, 00:21:51.071 "hdgst": false, 00:21:51.071 "ddgst": false, 00:21:51.071 "dhchap_key": "key2", 00:21:51.071 "allow_unrecognized_csi": false, 00:21:51.071 "method": "bdev_nvme_attach_controller", 00:21:51.071 "req_id": 1 00:21:51.071 } 00:21:51.071 Got JSON-RPC error response 00:21:51.071 response: 00:21:51.071 { 00:21:51.071 "code": -5, 00:21:51.071 "message": "Input/output error" 00:21:51.071 } 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.071 12:24:57 
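The request/response pair above is the first deliberate failure: key2 was never registered for this host on cnode0, so the handshake aborts and bdev_nvme_attach_controller comes back as JSON-RPC error -5 (Input/output error). The NOT wrapper from autotest_common.sh inverts the exit status so that the failure counts as a pass, and the es=1 bookkeeping that follows is its accounting. The same assertion without the wrapper, as a sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # key2 is not registered for this host, so the attach must fail
    if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
          -b nvme0 --dhchap-key key2; then
        echo "attach with unregistered key2 unexpectedly succeeded" >&2
        exit 1
    fi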
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:51.071 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:51.637 request: 00:21:51.637 { 00:21:51.637 "name": "nvme0", 00:21:51.637 "trtype": "tcp", 00:21:51.637 "traddr": "10.0.0.2", 00:21:51.637 "adrfam": "ipv4", 00:21:51.637 "trsvcid": "4420", 00:21:51.637 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:51.637 "prchk_reftag": false, 00:21:51.637 "prchk_guard": false, 00:21:51.637 "hdgst": false, 00:21:51.637 "ddgst": false, 00:21:51.637 "dhchap_key": "key1", 00:21:51.637 "dhchap_ctrlr_key": "ckey2", 00:21:51.637 "allow_unrecognized_csi": false, 00:21:51.637 "method": "bdev_nvme_attach_controller", 00:21:51.637 "req_id": 1 00:21:51.637 } 00:21:51.637 Got JSON-RPC error response 00:21:51.637 response: 00:21:51.637 { 00:21:51.637 "code": -5, 00:21:51.637 "message": "Input/output error" 00:21:51.637 } 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:51.637 12:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.637 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.202 request: 00:21:52.202 { 00:21:52.202 "name": "nvme0", 00:21:52.202 "trtype": "tcp", 00:21:52.202 "traddr": "10.0.0.2", 00:21:52.202 "adrfam": "ipv4", 00:21:52.202 "trsvcid": "4420", 00:21:52.202 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.202 "prchk_reftag": false, 00:21:52.202 "prchk_guard": false, 00:21:52.202 "hdgst": false, 00:21:52.202 "ddgst": false, 00:21:52.202 "dhchap_key": "key1", 00:21:52.202 "dhchap_ctrlr_key": "ckey1", 00:21:52.202 "allow_unrecognized_csi": false, 00:21:52.202 "method": "bdev_nvme_attach_controller", 00:21:52.202 "req_id": 1 00:21:52.202 } 00:21:52.202 Got JSON-RPC error response 00:21:52.202 response: 00:21:52.202 { 00:21:52.202 "code": -5, 00:21:52.202 "message": "Input/output error" 00:21:52.202 } 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3657127 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3657127 ']' 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3657127 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3657127 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3657127' 00:21:52.202 killing process with pid 3657127 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3657127 00:21:52.202 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3657127 00:21:53.574 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3678949 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3678949 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3678949 ']' 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.575 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3678949 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3678949 ']' 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
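At this point the first target (pid 3657127) has been killed and a fresh nvmf_tgt (pid 3678949) is started for the keyring phase: -L nvmf_auth turns on the auth debug log flag, and --wait-for-rpc holds subsystem initialization until the RPC server is up so that the key files can be registered first. A sketch of that startup, assuming the usual framework_start_init RPC is what releases the app once the keys below are loaded (the batched rpc_cmd that does this is only partially visible in the log):

    # second target instance: auth logging on, init deferred until keys exist
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # register key files with keyring_file_add_key (next records), then:
    $rpc framework_start_init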
00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.141 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.399 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.399 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:54.399 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:54.399 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.399 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.657 null0 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vlT 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.WI4 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WI4 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OvN 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.k0K ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k0K 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:54.915 12:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WC6 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.NKI ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NKI 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3gj 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
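In this phase the secrets live in files and are registered by name in the keyring (keyring_file_add_key), so nvmf_subsystem_add_host and bdev_nvme_attach_controller reference key0 through key3 and ckey0 through ckey2 by name rather than carrying inline DHHC-1 strings; note that key3 has no companion ckey file (the [[ -n '' ]] check above), so only one-way authentication is configured for it. In outline, using this run's paths (the host-side app presumably loads the same files into its own keyring earlier in the script; those calls are not in this excerpt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # register the key file under a name, then refer to it by that name
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.3gj
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3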
00:21:54.915 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.480 nvme0n1 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.738 { 00:21:55.738 "cntlid": 1, 00:21:55.738 "qid": 0, 00:21:55.738 "state": "enabled", 00:21:55.738 "thread": "nvmf_tgt_poll_group_000", 00:21:55.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:55.738 "listen_address": { 00:21:55.738 "trtype": "TCP", 00:21:55.738 "adrfam": "IPv4", 00:21:55.738 "traddr": "10.0.0.2", 00:21:55.738 "trsvcid": "4420" 00:21:55.738 }, 00:21:55.738 "peer_address": { 00:21:55.738 "trtype": "TCP", 00:21:55.738 "adrfam": "IPv4", 00:21:55.738 "traddr": "10.0.0.1", 00:21:55.738 "trsvcid": "42540" 00:21:55.738 }, 00:21:55.738 "auth": { 00:21:55.738 "state": "completed", 00:21:55.738 "digest": "sha512", 00:21:55.738 "dhgroup": "ffdhe8192" 00:21:55.738 } 00:21:55.738 } 00:21:55.738 ]' 00:21:55.738 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.997 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.997 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.997 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.997 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.997 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.997 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.997 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.254 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:56.254 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=: 00:21:56.820 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.820 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.820 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.820 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.820 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.820 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.821 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:57.078 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.078 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.078 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.079 request: 00:21:57.079 { 00:21:57.079 "name": "nvme0", 00:21:57.079 "trtype": "tcp", 00:21:57.079 "traddr": "10.0.0.2", 00:21:57.079 "adrfam": "ipv4", 00:21:57.079 "trsvcid": "4420", 00:21:57.079 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:57.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:57.079 "prchk_reftag": false, 00:21:57.079 "prchk_guard": false, 00:21:57.079 "hdgst": false, 00:21:57.079 "ddgst": false, 00:21:57.079 "dhchap_key": "key3", 00:21:57.079 "allow_unrecognized_csi": false, 00:21:57.079 "method": "bdev_nvme_attach_controller", 00:21:57.079 "req_id": 1 00:21:57.079 } 00:21:57.079 Got JSON-RPC error response 00:21:57.079 response: 00:21:57.079 { 00:21:57.079 "code": -5, 00:21:57.079 "message": "Input/output error" 00:21:57.079 } 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:57.079 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.336 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.594 request: 00:21:57.594 { 00:21:57.594 "name": "nvme0", 00:21:57.594 "trtype": "tcp", 00:21:57.594 "traddr": "10.0.0.2", 00:21:57.594 "adrfam": "ipv4", 00:21:57.594 "trsvcid": "4420", 00:21:57.594 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:57.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:57.594 "prchk_reftag": false, 00:21:57.594 "prchk_guard": false, 00:21:57.594 "hdgst": false, 00:21:57.594 "ddgst": false, 00:21:57.594 "dhchap_key": "key3", 00:21:57.594 "allow_unrecognized_csi": false, 00:21:57.594 "method": "bdev_nvme_attach_controller", 00:21:57.594 "req_id": 1 00:21:57.594 } 00:21:57.594 Got JSON-RPC error response 00:21:57.594 response: 00:21:57.594 { 00:21:57.594 "code": -5, 00:21:57.594 "message": "Input/output error" 00:21:57.594 } 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.594 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
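The two -5 errors above are negotiation mismatches rather than wrong keys: the host is first limited to --dhchap-digests sha256 (while key3 is a SHA-512-type key file), then to --dhchap-dhgroups ffdhe2048 only, and each NOT bdev_connect with key3 is expected to fail, presumably because no mutually acceptable digest/DH-group pair remains. The trace then widens the host back out before the remaining cases:

    # restore every digest and DH group after the mismatch tests
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192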
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:57.852 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:58.111 request:
00:21:58.111 {
00:21:58.111 "name": "nvme0",
00:21:58.111 "trtype": "tcp",
00:21:58.111 "traddr": "10.0.0.2",
00:21:58.111 "adrfam": "ipv4",
00:21:58.111 "trsvcid": "4420",
00:21:58.111 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:58.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:58.111 "prchk_reftag": false,
00:21:58.111 "prchk_guard": false,
00:21:58.111 "hdgst": false,
00:21:58.111 "ddgst": false,
00:21:58.111 "dhchap_key": "key0",
00:21:58.111 "dhchap_ctrlr_key": "key1",
00:21:58.111 "allow_unrecognized_csi": false,
00:21:58.111 "method": "bdev_nvme_attach_controller",
00:21:58.111 "req_id": 1
00:21:58.111 }
00:21:58.111 Got JSON-RPC error response
00:21:58.111 response:
00:21:58.111 {
00:21:58.111 "code": -5,
00:21:58.111 "message": "Input/output error"
00:21:58.111 }
00:21:58.111 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:21:58.111 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:58.111 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:58.111 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:58.111 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:21:58.111 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:58.111 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:58.369 nvme0n1
00:21:58.369 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:21:58.369 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:21:58.369 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:58.626 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:58.626 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:58.626 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:58.884 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1
00:21:58.884 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.884 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.885 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.885 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:21:58.885 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:58.885 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:59.450 nvme0n1
00:21:59.450 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:21:59.450 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
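
The passage above is one full key-rotation cycle: attach with key0, verify the controller by name, detach, re-key the subsystem/host pair on the target with nvmf_subsystem_set_keys, and prove the host can still authenticate with key1. Reduced to its essentials (rpc_cmd is assumed to address the target app's RPC socket, as in the harness; key names refer to the keyring entries loaded earlier in the run):

  # Rotate the host from key0 to key1 on the target side...
  rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$NVME_HOSTNQN" --dhchap-key key1
  # ...then re-authenticate from the host with the new key and check the name.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$NVME_HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
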
00:21:59.450 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:21:59.708 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:59.966 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:59.966 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=:
00:21:59.966 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: --dhchap-ctrl-secret DHHC-1:03:MGMyN2Q4NWQxY2M5YzhlYmJjYTA2ZGJmNDc4MmZhODE1YjcxMzlhOTMyNjdkMzY4NWI3OTY5NGU5MTAxZTIxY7Q4/4U=:
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:00.532 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
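
Here the harness switches from the SPDK initiator to the kernel one: nvme_connect hands the raw DHHC-1 secrets to nvme-cli, and nvme_get_ctrlr then walks the fabrics sysfs tree to find which nvmeX controller claimed the subsystem NQN. A plausible reconstruction under stated assumptions (the subsysnqn read is inferred from the NQN comparison in the trace; $key and $ckey stand for the DHHC-1 strings generated earlier in this run):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$NVME_HOSTNQN" --hostid "$NVME_HOSTID" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  # Find the controller that owns the subsystem we just connected to.
  for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
    if [[ $(cat "$dev/subsysnqn") == nqn.2024-03.io.spdk:cnode0 ]]; then
      nctrlr=${dev##*/}
      break
    fi
  done
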
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:00.790 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:01.048 request:
00:22:01.048 {
00:22:01.048 "name": "nvme0",
00:22:01.048 "trtype": "tcp",
00:22:01.048 "traddr": "10.0.0.2",
00:22:01.048 "adrfam": "ipv4",
00:22:01.048 "trsvcid": "4420",
00:22:01.048 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:01.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:01.048 "prchk_reftag": false,
00:22:01.048 "prchk_guard": false,
00:22:01.048 "hdgst": false,
00:22:01.048 "ddgst": false,
00:22:01.048 "dhchap_key": "key1",
00:22:01.048 "allow_unrecognized_csi": false,
00:22:01.048 "method": "bdev_nvme_attach_controller",
00:22:01.048 "req_id": 1
00:22:01.048 }
00:22:01.048 Got JSON-RPC error response
00:22:01.048 response:
00:22:01.048 {
00:22:01.048 "code": -5,
00:22:01.048 "message": "Input/output error"
00:22:01.048 }
00:22:01.048 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:01.048 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:01.048 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:01.048 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:01.048 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:01.048 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:01.048 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:01.753 nvme0n1
00:22:01.753 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:22:01.753 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:22:01.753 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:02.030 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:02.030 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:02.030 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:02.289 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:02.289 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.289 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.289 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.289 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:22:02.289 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:02.289 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:02.547 nvme0n1
00:22:02.547 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:22:02.547 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:22:02.547 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
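
Step @233/@234 above is worth calling out: nvmf_subsystem_set_keys is invoked with no key arguments at all, and the very next attach succeeds without any --dhchap-key. Read from this trace alone (the RPC documentation is not quoted here), clearing the key pair appears to drop the DH-HMAC-CHAP requirement for that host entirely:

  # Clear the key pair for this host on the subsystem (no --dhchap-key args)...
  rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$NVME_HOSTNQN"
  # ...after which an unauthenticated attach is accepted.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$NVME_HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0
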
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: '' 2s
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa:
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa: ]]
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2I5OTRlZmFiMzYxMWNlZTc2OTlkYjI2MzU5YmFhNjKuZ7sa:
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:02.805 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: 2s
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==:
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==: ]]
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzU1YmU1N2E2MTZmYWUxNGFlYThhZGExNmQzZTVlNmFmZDMyMTI2MTBiMzhkNzEz8XMmXA==:
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:05.333 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:07.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
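
nvme_set_keys re-keys the live kernel controller between the two waitforblk checks. The xtrace shows the echo of each DHHC-1 string and the dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 assignment, but xtrace does not print redirection targets; since the kernel exposes per-controller dhchap_secret and dhchap_ctrl_secret attributes, a plausible reconstruction of the helper's body is:

  # Assumed sysfs targets; the trace confirms only the echoes and the sleep.
  dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
  [[ -n $key ]] && echo "$key" > "$dev/dhchap_secret"
  [[ -n $ckey ]] && echo "$ckey" > "$dev/dhchap_ctrl_secret"
  sleep 2s   # give the controller time to reauthenticate before waitforblk
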
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:07.231 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:07.797 nvme0n1
00:22:07.797 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:07.797 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.797 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.797 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.797 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:07.797 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:08.362 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:22:08.362 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:22:08.362 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:08.362 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:08.362 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:08.362 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.362 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.362 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.362 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:22:08.362 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:22:08.619 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:22:08.619 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:22:08.619 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:08.876 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:09.441 request:
00:22:09.441 {
00:22:09.441 "name": "nvme0",
00:22:09.441 "dhchap_key": "key1",
00:22:09.441 "dhchap_ctrlr_key": "key3",
00:22:09.441 "method": "bdev_nvme_set_keys",
00:22:09.441 "req_id": 1
00:22:09.441 }
00:22:09.441 Got JSON-RPC error response
00:22:09.441 response:
00:22:09.441 {
00:22:09.441 "code": -13,
00:22:09.441 "message": "Permission denied"
00:22:09.441 }
00:22:09.441 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:09.441 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:09.441 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:09.441 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:09.441 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:09.441 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:09.441 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
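
This block exercises bdev_nvme_set_keys, which re-keys an existing SPDK host controller in place: rotating to key2/key3 succeeds once the target has been given the same pair, and the NOT-wrapped call shows the guard rail, namely that requesting a pair the target has not registered for this host is rejected with -13 (Permission denied) rather than tearing down the session. Condensed to the two calls that matter:

  # Allowed: the target was just re-keyed to key2/key3 via nvmf_subsystem_set_keys.
  hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  # Must be refused: key1 is no longer the registered host key.
  if hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "re-key should have been rejected" >&2
    exit 1
  fi
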
00:22:09.441 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:22:09.441 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:22:10.374 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:10.374 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:10.374 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:10.632 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:11.565 nvme0n1
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:11.565 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:11.821 request:
00:22:11.821 {
00:22:11.821 "name": "nvme0",
00:22:11.821 "dhchap_key": "key2",
00:22:11.821 "dhchap_ctrlr_key": "key0",
00:22:11.821 "method": "bdev_nvme_set_keys",
00:22:11.821 "req_id": 1
00:22:11.821 }
00:22:11.821 Got JSON-RPC error response
00:22:11.821 response:
00:22:11.821 {
00:22:11.821 "code": -13,
00:22:11.821 "message": "Permission denied"
00:22:11.821 }
00:22:11.821 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:11.821 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:11.821 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:11.821 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:11.821 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:11.821 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:11.821 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:12.079 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:22:12.079 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:22:13.011 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:13.011 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:13.011 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3657364
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3657364 ']'
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3657364
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3657364
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3657364'
00:22:13.269 killing process with pid 3657364
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3657364
00:22:13.269 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3657364
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:15.797 rmmod nvme_tcp
00:22:15.797 rmmod nvme_fabrics
00:22:15.797 rmmod nvme_keyring
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3678949 ']'
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3678949
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3678949 ']'
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3678949
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678949
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678949'
00:22:15.797 killing process with pid 3678949
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3678949
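
The (( 1 != 0 )) / sleep 1s pairs above are a drain loop: after the target is re-keyed away from what the host holds, the controller that was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 fails reauthentication, gives up within about a second, and drops out of bdev_nvme_get_controllers. In the shape the trace implies:

  # Wait for the stale controller to disappear from the host.
  while (( $(hostrpc bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
  done
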
00:22:15.797 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3678949
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:17.171 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vlT /tmp/spdk.key-sha256.OvN /tmp/spdk.key-sha384.WC6 /tmp/spdk.key-sha512.3gj /tmp/spdk.key-sha512.WI4 /tmp/spdk.key-sha384.k0K /tmp/spdk.key-sha256.NKI '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:22:19.071
00:22:19.071 real 2m36.197s
00:22:19.071 user 5m57.204s
00:22:19.071 sys 0m23.447s
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:19.071 ************************************
00:22:19.071 END TEST nvmf_auth_target
00:22:19.071 ************************************
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:19.071 ************************************
00:22:19.071 START TEST nvmf_bdevio_no_huge
00:22:19.071 ************************************
00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages * Looking for test storage...
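
The teardown between killprocess and the END TEST banner follows the harness's standard shape: unload the kernel initiator modules, strip only the SPDK-tagged iptables rules, flush the test interface, and delete the throwaway DH-CHAP key files. Approximately (the iptables filtering works because the harness tags its rules with an SPDK_NVMF comment; the rm glob condenses the per-file list logged above):

  modprobe -v -r nvme-tcp nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk.key-*   # condensed; the run removes each generated key file by name
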
00:22:19.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.071 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:19.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.072 --rc genhtml_branch_coverage=1 00:22:19.072 --rc genhtml_function_coverage=1 00:22:19.072 --rc genhtml_legend=1 00:22:19.072 --rc geninfo_all_blocks=1 00:22:19.072 --rc geninfo_unexecuted_blocks=1 00:22:19.072 00:22:19.072 ' 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.072 --rc genhtml_branch_coverage=1 00:22:19.072 --rc genhtml_function_coverage=1 00:22:19.072 --rc genhtml_legend=1 00:22:19.072 --rc geninfo_all_blocks=1 00:22:19.072 --rc geninfo_unexecuted_blocks=1 00:22:19.072 00:22:19.072 ' 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.072 --rc genhtml_branch_coverage=1 00:22:19.072 --rc genhtml_function_coverage=1 00:22:19.072 --rc genhtml_legend=1 00:22:19.072 --rc geninfo_all_blocks=1 00:22:19.072 --rc geninfo_unexecuted_blocks=1 00:22:19.072 00:22:19.072 ' 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.072 --rc genhtml_branch_coverage=1 00:22:19.072 --rc genhtml_function_coverage=1 00:22:19.072 --rc genhtml_legend=1 00:22:19.072 --rc geninfo_all_blocks=1 00:22:19.072 --rc geninfo_unexecuted_blocks=1 00:22:19.072 00:22:19.072 ' 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.072 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:19.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.331 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.594 
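At this point gather_supported_nvmf_pci_devs is building per-family NIC buckets (e810 just above; x722 and mlx follow) and filling them by PCI vendor:device ID before deciding which ports the TCP tests can use. A minimal sketch of the same classification, assuming the lspci -Dnmm field layout (slot, class, vendor, device) and collapsing the individual Mellanox device IDs into a wildcard for brevity:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    while read -r slot vid did; do
        case "$vid:$did" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("$slot") ;;  # Intel E810 family
            "$intel:0x37d2")                   x722+=("$slot") ;;  # Intel X722
            "$mellanox:"*)                     mlx+=("$slot")  ;;  # Mellanox ConnectX
        esac
    done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')

Only the e810 bucket is populated on this node, which is why the two 0x159b ports at 0000:af:00.0/1 become the pci_devs scanned in the entries that follow.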
12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:24.594 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:24.594 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:24.594 Found net devices under 0000:af:00.0: cvl_0_0 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.594 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:24.595 Found net devices under 0000:af:00.1: cvl_0_1 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.595 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.852 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.852 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.852 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.852 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:22:24.853 00:22:24.853 --- 10.0.0.2 ping statistics --- 00:22:24.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.853 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:22:24.853 00:22:24.853 --- 10.0.0.1 ping statistics --- 00:22:24.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.853 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3686189 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3686189 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3686189 ']' 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.853 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.111 [2024-12-10 12:25:31.705259] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:22:25.111 [2024-12-10 12:25:31.705368] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:25.111 [2024-12-10 12:25:31.840264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.368 [2024-12-10 12:25:31.961614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.368 [2024-12-10 12:25:31.961661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.368 [2024-12-10 12:25:31.961671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.368 [2024-12-10 12:25:31.961682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.368 [2024-12-10 12:25:31.961690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
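The target has just been launched inside the new namespace with hugepages disabled (--no-huge -s 1024) and core mask 0x78, and waitforlisten now polls the RPC socket until the app answers. A rough sketch of that launch-and-wait pattern, using the repo-relative paths implied by the log (the real waitforlisten does more bookkeeping than this loop):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    for _ in {1..100}; do    # retry until the RPC socket is serving
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Mask 0x78 selects cores 3 through 6, matching the four Reactor started notices below.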
00:22:25.368 [2024-12-10 12:25:31.963755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:25.368 [2024-12-10 12:25:31.963859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:25.368 [2024-12-10 12:25:31.963936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.368 [2024-12-10 12:25:31.963960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.932 [2024-12-10 12:25:32.554674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.932 Malloc0 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.932 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.933 [2024-12-10 12:25:32.656285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.933 { 00:22:25.933 "params": { 00:22:25.933 "name": "Nvme$subsystem", 00:22:25.933 "trtype": "$TEST_TRANSPORT", 00:22:25.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.933 "adrfam": "ipv4", 00:22:25.933 "trsvcid": "$NVMF_PORT", 00:22:25.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.933 "hdgst": ${hdgst:-false}, 00:22:25.933 "ddgst": ${ddgst:-false} 00:22:25.933 }, 00:22:25.933 "method": "bdev_nvme_attach_controller" 00:22:25.933 } 00:22:25.933 EOF 00:22:25.933 )") 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:25.933 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:25.933 "params": { 00:22:25.933 "name": "Nvme1", 00:22:25.933 "trtype": "tcp", 00:22:25.933 "traddr": "10.0.0.2", 00:22:25.933 "adrfam": "ipv4", 00:22:25.933 "trsvcid": "4420", 00:22:25.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.933 "hdgst": false, 00:22:25.933 "ddgst": false 00:22:25.933 }, 00:22:25.933 "method": "bdev_nvme_attach_controller" 00:22:25.933 }' 00:22:25.933 [2024-12-10 12:25:32.732305] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
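Condensed, the target-side setup driven through rpc_cmd above is the following rpc.py sequence (every value exactly as it appears in the log), and the JSON that gen_nvmf_target_json hands to bdevio on /dev/fd/62 encodes the attach call sketched under it; the plain-RPC form is shown only for illustration:

    # target side: transport, backing bdev, subsystem, namespace, listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, equivalent to the generated JSON config
    rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1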
00:22:25.933 [2024-12-10 12:25:32.732416] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3686430 ] 00:22:26.190 [2024-12-10 12:25:32.860897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:26.190 [2024-12-10 12:25:32.972650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.190 [2024-12-10 12:25:32.972658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.190 [2024-12-10 12:25:32.972667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.753 I/O targets: 00:22:26.753 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:26.753 00:22:26.753 00:22:26.753 CUnit - A unit testing framework for C - Version 2.1-3 00:22:26.753 http://cunit.sourceforge.net/ 00:22:26.753 00:22:26.753 00:22:26.753 Suite: bdevio tests on: Nvme1n1 00:22:26.753 Test: blockdev write read block ...passed 00:22:26.753 Test: blockdev write zeroes read block ...passed 00:22:26.753 Test: blockdev write zeroes read no split ...passed 00:22:27.011 Test: blockdev write zeroes read split ...passed 00:22:27.011 Test: blockdev write zeroes read split partial ...passed 00:22:27.011 Test: blockdev reset ...[2024-12-10 12:25:33.665942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:27.011 [2024-12-10 12:25:33.666048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000323a00 (9): Bad file descriptor 00:22:27.011 [2024-12-10 12:25:33.686212] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:27.011 passed 00:22:27.011 Test: blockdev write read 8 blocks ...passed 00:22:27.011 Test: blockdev write read size > 128k ...passed 00:22:27.011 Test: blockdev write read invalid size ...passed 00:22:27.011 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:27.011 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:27.011 Test: blockdev write read max offset ...passed 00:22:27.011 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:27.011 Test: blockdev writev readv 8 blocks ...passed 00:22:27.011 Test: blockdev writev readv 30 x 1block ...passed 00:22:27.269 Test: blockdev writev readv block ...passed 00:22:27.269 Test: blockdev writev readv size > 128k ...passed 00:22:27.269 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:27.269 Test: blockdev comparev and writev ...[2024-12-10 12:25:33.858750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.858795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.858815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.858827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.859122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.859143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.859160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.859176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.859478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.859494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.859514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.859525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.859800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.859815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.859831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.269 [2024-12-10 12:25:33.859845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:27.269 passed 00:22:27.269 Test: blockdev nvme passthru rw ...passed 00:22:27.269 Test: blockdev nvme passthru vendor specific ...[2024-12-10 12:25:33.941584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.269 [2024-12-10 12:25:33.941618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.941748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.269 [2024-12-10 12:25:33.941762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.941878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.269 [2024-12-10 12:25:33.941891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:27.269 [2024-12-10 12:25:33.942025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.269 [2024-12-10 12:25:33.942040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:27.269 passed 00:22:27.269 Test: blockdev nvme admin passthru ...passed 00:22:27.269 Test: blockdev copy ...passed 00:22:27.269 00:22:27.269 Run Summary: Type Total Ran Passed Failed Inactive 00:22:27.269 suites 1 1 n/a 0 0 00:22:27.269 tests 23 23 23 0 0 00:22:27.269 asserts 152 152 152 0 n/a 00:22:27.269 00:22:27.269 Elapsed time = 1.176 seconds 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:27.834 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.092 rmmod nvme_tcp 00:22:28.092 rmmod nvme_fabrics 00:22:28.092 rmmod nvme_keyring 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3686189 ']' 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3686189 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3686189 ']' 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3686189 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:28.092 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.093 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3686189 00:22:28.093 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:28.093 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:28.093 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3686189' 00:22:28.093 killing process with pid 3686189 00:22:28.093 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3686189 00:22:28.093 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3686189 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.026 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.928 00:22:30.928 real 0m11.892s 00:22:30.928 user 0m19.368s 00:22:30.928 sys 0m5.464s 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.928 ************************************ 00:22:30.928 END TEST nvmf_bdevio_no_huge 00:22:30.928 ************************************ 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:30.928 ************************************ 00:22:30.928 START TEST nvmf_tls 00:22:30.928 ************************************ 00:22:30.928 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:31.186 * Looking for test storage... 00:22:31.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:31.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.186 --rc genhtml_branch_coverage=1 00:22:31.186 --rc genhtml_function_coverage=1 00:22:31.186 --rc genhtml_legend=1 00:22:31.186 --rc geninfo_all_blocks=1 00:22:31.186 --rc geninfo_unexecuted_blocks=1 00:22:31.186 00:22:31.186 ' 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:31.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.186 --rc genhtml_branch_coverage=1 00:22:31.186 --rc genhtml_function_coverage=1 00:22:31.186 --rc genhtml_legend=1 00:22:31.186 --rc geninfo_all_blocks=1 00:22:31.186 --rc geninfo_unexecuted_blocks=1 00:22:31.186 00:22:31.186 ' 00:22:31.186 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:31.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.186 --rc genhtml_branch_coverage=1 00:22:31.186 --rc genhtml_function_coverage=1 00:22:31.186 --rc genhtml_legend=1 00:22:31.186 --rc geninfo_all_blocks=1 00:22:31.187 --rc geninfo_unexecuted_blocks=1 00:22:31.187 00:22:31.187 ' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.187 --rc genhtml_branch_coverage=1 00:22:31.187 --rc genhtml_function_coverage=1 00:22:31.187 --rc genhtml_legend=1 00:22:31.187 --rc geninfo_all_blocks=1 00:22:31.187 --rc geninfo_unexecuted_blocks=1 00:22:31.187 00:22:31.187 ' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
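The lt/cmp_versions trace a few entries above is a field-wise comparison of dotted version strings: lt 1.15 2 sees 1 < 2 in the first field and returns true, so the branch- and function-coverage --rc flags are selected. A compact sketch of the same idea (version_lt is a hypothetical name; the script's own helpers are lt and cmp_versions):

    version_lt() {    # true when $1 sorts before $2, field by field
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i
        for i in 0 1 2; do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'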
00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.187 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:36.451 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:36.451 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:36.451 Found net devices under 0000:af:00.0: cvl_0_0 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:36.451 Found net devices under 0000:af:00.1: cvl_0_1 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.451 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:22:36.710 00:22:36.710 --- 10.0.0.2 ping statistics --- 00:22:36.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.710 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:36.710 00:22:36.710 --- 10.0.0.1 ping statistics --- 00:22:36.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.710 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3690319 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3690319 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3690319 ']' 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.710 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.710 [2024-12-10 12:25:43.466971] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
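By this point the harness has built the two-namespace topology the rest of the section depends on: the target-side port cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace. A minimal sketch of the same wiring, assuming two ports already named cvl_0_0 and cvl_0_1 as in this run:

  # target port moves into its own namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to reach port 4420 on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify both directions before starting the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1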
00:22:36.710 [2024-12-10 12:25:43.467060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.969 [2024-12-10 12:25:43.584284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.969 [2024-12-10 12:25:43.685359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.969 [2024-12-10 12:25:43.685404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.969 [2024-12-10 12:25:43.685414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.969 [2024-12-10 12:25:43.685424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.969 [2024-12-10 12:25:43.685432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.969 [2024-12-10 12:25:43.686742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:37.535 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:37.792 true 00:22:37.793 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:37.793 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:38.050 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:38.051 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:38.051 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:38.051 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:38.051 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:38.309 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:38.309 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:38.309 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:38.567 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:38.567 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:38.825 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:38.825 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:38.825 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:38.825 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:38.825 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:38.825 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:38.825 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:39.083 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:39.083 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:39.341 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:39.341 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:39.341 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:39.341 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:39.341 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.K7A6pHWw9I 00:22:39.600 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:39.858 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Gwx0sCgCQC 00:22:39.858 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:39.858 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:39.858 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.K7A6pHWw9I 00:22:39.858 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Gwx0sCgCQC 00:22:39.858 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:39.858 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:40.424 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.K7A6pHWw9I 00:22:40.424 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.K7A6pHWw9I 00:22:40.424 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:40.683 [2024-12-10 12:25:47.300867] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.683 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:40.683 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:40.941 [2024-12-10 12:25:47.673782] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.941 [2024-12-10 12:25:47.674028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.941 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:41.199 malloc0 00:22:41.199 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:41.456 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.K7A6pHWw9I 00:22:41.456 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.714 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.K7A6pHWw9I 00:22:53.915 Initializing NVMe Controllers 00:22:53.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:53.915 Initialization complete. Launching workers. 00:22:53.915 ======================================================== 00:22:53.915 Latency(us) 00:22:53.915 Device Information : IOPS MiB/s Average min max 00:22:53.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12987.39 50.73 4928.21 1263.99 6644.48 00:22:53.915 ======================================================== 00:22:53.915 Total : 12987.39 50.73 4928.21 1263.99 6644.48 00:22:53.915 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.K7A6pHWw9I 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.K7A6pHWw9I 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3692847 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3692847 /var/tmp/bdevperf.sock 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3692847 ']' 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
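The two NVMeTLSkey-1:01:...: strings generated above are TLS PSKs in the NVMe interchange format: a fixed NVMeTLSkey-1 prefix, a two-hex-digit hash indicator (01 for SHA-256), and a base64 blob carrying the configured PSK followed by its CRC32, each field colon-delimited. Each key is written to a mktemp file, restricted to mode 0600, registered with keyring_file_add_key, and bound to the host via nvmf_subsystem_add_host --psk. A rough stand-in for the format_interchange_psk helper traced above, assuming a little-endian CRC32 appended before encoding (which is what the helper's inline python appears to do):

  # sketch only: format_psk <key-string> <digest>, where digest 1 = SHA-256
  format_psk() {
      # prints NVMeTLSkey-1:<digest as 2 hex digits>:base64(key + crc32):
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()), end="")' "$1" "$2"
  }
  key_path=$(mktemp)
  format_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
  chmod 0600 "$key_path"   # same permissions this run sets before registering the key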
00:22:53.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.915 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.915 [2024-12-10 12:25:58.725941] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:22:53.915 [2024-12-10 12:25:58.726032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692847 ] 00:22:53.915 [2024-12-10 12:25:58.834367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.915 [2024-12-10 12:25:58.944259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.915 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.915 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.915 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7A6pHWw9I 00:22:53.915 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.915 [2024-12-10 12:25:59.888731] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.915 TLSTESTn1 00:22:53.915 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.915 Running I/O for 10 seconds... 
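This is the positive path in condensed form: the bdevperf process registers the same key under the name key0, attaches TLSTESTn1 to the target over TLS, and perform_tests drives verify I/O for ten seconds (the per-second IOPS samples follow). The three steps, spelled out with the values used in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7A6pHWw9I
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests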
00:22:55.414 4300.00 IOPS, 16.80 MiB/s [2024-12-10T11:26:03.172Z] 4372.00 IOPS, 17.08 MiB/s [2024-12-10T11:26:04.106Z] 4447.00 IOPS, 17.37 MiB/s [2024-12-10T11:26:05.548Z] 4519.75 IOPS, 17.66 MiB/s [2024-12-10T11:26:06.150Z] 4537.80 IOPS, 17.73 MiB/s [2024-12-10T11:26:07.520Z] 4566.67 IOPS, 17.84 MiB/s [2024-12-10T11:26:08.452Z] 4566.86 IOPS, 17.84 MiB/s [2024-12-10T11:26:09.383Z] 4582.00 IOPS, 17.90 MiB/s [2024-12-10T11:26:10.314Z] 4601.22 IOPS, 17.97 MiB/s [2024-12-10T11:26:10.314Z] 4608.00 IOPS, 18.00 MiB/s 00:23:03.488 Latency(us) 00:23:03.488 [2024-12-10T11:26:10.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.488 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.488 Verification LBA range: start 0x0 length 0x2000 00:23:03.488 TLSTESTn1 : 10.02 4611.77 18.01 0.00 0.00 27708.79 6459.98 32206.26 00:23:03.488 [2024-12-10T11:26:10.314Z] =================================================================================================================== 00:23:03.488 [2024-12-10T11:26:10.314Z] Total : 4611.77 18.01 0.00 0.00 27708.79 6459.98 32206.26 00:23:03.488 { 00:23:03.488 "results": [ 00:23:03.488 { 00:23:03.488 "job": "TLSTESTn1", 00:23:03.488 "core_mask": "0x4", 00:23:03.488 "workload": "verify", 00:23:03.488 "status": "finished", 00:23:03.488 "verify_range": { 00:23:03.488 "start": 0, 00:23:03.488 "length": 8192 00:23:03.488 }, 00:23:03.488 "queue_depth": 128, 00:23:03.488 "io_size": 4096, 00:23:03.488 "runtime": 10.019152, 00:23:03.488 "iops": 4611.767542802026, 00:23:03.488 "mibps": 18.014716964070413, 00:23:03.488 "io_failed": 0, 00:23:03.488 "io_timeout": 0, 00:23:03.488 "avg_latency_us": 27708.785408945034, 00:23:03.488 "min_latency_us": 6459.977142857143, 00:23:03.488 "max_latency_us": 32206.262857142858 00:23:03.488 } 00:23:03.488 ], 00:23:03.488 "core_count": 1 00:23:03.488 } 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3692847 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3692847 ']' 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3692847 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3692847 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3692847' 00:23:03.488 killing process with pid 3692847 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3692847 00:23:03.488 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.488 00:23:03.488 Latency(us) 00:23:03.488 [2024-12-10T11:26:10.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.488 [2024-12-10T11:26:10.314Z] 
=================================================================================================================== 00:23:03.488 [2024-12-10T11:26:10.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.488 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3692847 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Gwx0sCgCQC 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Gwx0sCgCQC 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Gwx0sCgCQC 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Gwx0sCgCQC 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3694785 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3694785 /var/tmp/bdevperf.sock 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3694785 ']' 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
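From here the section works through expected-failure cases: each run_bdevperf is wrapped in NOT, which inverts the exit status, so a case passes only when the attach fails. The local es=0, valid_exec_arg, and (( !es == 0 )) fragments in the traces come from that wrapper; a plausible reconstruction from the traces alone (the autotest_common.sh source may differ in detail):

  NOT() {
      # succeed only if the wrapped command fails
      local es=0
      valid_exec_arg "$@" || return 1      # refuse arguments that are not runnable
      "$@" || es=$?
      (( es > 128 )) && es=$((es & ~128))  # assumed: fold signal exits into plain codes
      (( !es == 0 ))                       # status 0 iff the command returned non-zero
  }

The first such case presents /tmp/tmp.Gwx0sCgCQC, a well-formed key the target never associated with nqn.2016-06.io.spdk:host1; the server-side PSK lookup misses, the TLS session is torn down, and bdev_nvme_attach_controller is expected to report -5, Input/output error, as the response below confirms.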
00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.422 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.422 [2024-12-10 12:26:11.187774] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:04.422 [2024-12-10 12:26:11.187879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3694785 ] 00:23:04.680 [2024-12-10 12:26:11.294803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.680 [2024-12-10 12:26:11.399804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.244 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.244 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.244 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gwx0sCgCQC 00:23:05.501 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:05.758 [2024-12-10 12:26:12.335605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.758 [2024-12-10 12:26:12.343823] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:05.758 [2024-12-10 12:26:12.344681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:23:05.758 [2024-12-10 12:26:12.345661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:23:05.758 [2024-12-10 12:26:12.346656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:05.758 [2024-12-10 12:26:12.346681] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:05.758 [2024-12-10 12:26:12.346695] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:05.758 [2024-12-10 12:26:12.346713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:05.758 request: 00:23:05.758 { 00:23:05.758 "name": "TLSTEST", 00:23:05.758 "trtype": "tcp", 00:23:05.758 "traddr": "10.0.0.2", 00:23:05.758 "adrfam": "ipv4", 00:23:05.758 "trsvcid": "4420", 00:23:05.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.758 "prchk_reftag": false, 00:23:05.758 "prchk_guard": false, 00:23:05.758 "hdgst": false, 00:23:05.758 "ddgst": false, 00:23:05.758 "psk": "key0", 00:23:05.758 "allow_unrecognized_csi": false, 00:23:05.758 "method": "bdev_nvme_attach_controller", 00:23:05.758 "req_id": 1 00:23:05.758 } 00:23:05.758 Got JSON-RPC error response 00:23:05.758 response: 00:23:05.758 { 00:23:05.758 "code": -5, 00:23:05.758 "message": "Input/output error" 00:23:05.758 } 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3694785 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3694785 ']' 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3694785 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3694785 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3694785' 00:23:05.758 killing process with pid 3694785 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3694785 00:23:05.758 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.758 00:23:05.758 Latency(us) 00:23:05.758 [2024-12-10T11:26:12.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.758 [2024-12-10T11:26:12.584Z] =================================================================================================================== 00:23:05.758 [2024-12-10T11:26:12.584Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.758 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3694785 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.K7A6pHWw9I 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.K7A6pHWw9I 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.K7A6pHWw9I 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.K7A6pHWw9I 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3695105 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3695105 /var/tmp/bdevperf.sock 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3695105 ']' 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.690 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.690 [2024-12-10 12:26:13.374371] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
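This second failure case presents the correct key but from nqn.2016-06.io.spdk:host2. The target derives the TLS PSK identity from the host and subsystem NQNs (the NVMe0R01 ... strings in the errors below), and only host1 has a key registered, so the lookup misses and the attach fails with the same -5. Letting host2 in would take its own binding on the target side, along these lines (hypothetical, mirroring the host1 setup above; key1 is an illustrative name):

  target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $target_rpc keyring_file_add_key key1 /tmp/tmp.K7A6pHWw9I
  $target_rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host2 --psk key1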
00:23:06.690 [2024-12-10 12:26:13.374465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695105 ] 00:23:06.690 [2024-12-10 12:26:13.480340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.948 [2024-12-10 12:26:13.590461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.512 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.512 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.512 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7A6pHWw9I 00:23:07.769 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:07.769 [2024-12-10 12:26:14.524485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.769 [2024-12-10 12:26:14.531935] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.769 [2024-12-10 12:26:14.531965] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.770 [2024-12-10 12:26:14.532002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.770 [2024-12-10 12:26:14.532308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:23:07.770 [2024-12-10 12:26:14.533289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:23:07.770 [2024-12-10 12:26:14.534290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:07.770 [2024-12-10 12:26:14.534323] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.770 [2024-12-10 12:26:14.534338] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:07.770 [2024-12-10 12:26:14.534353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:07.770 request: 00:23:07.770 { 00:23:07.770 "name": "TLSTEST", 00:23:07.770 "trtype": "tcp", 00:23:07.770 "traddr": "10.0.0.2", 00:23:07.770 "adrfam": "ipv4", 00:23:07.770 "trsvcid": "4420", 00:23:07.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.770 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.770 "prchk_reftag": false, 00:23:07.770 "prchk_guard": false, 00:23:07.770 "hdgst": false, 00:23:07.770 "ddgst": false, 00:23:07.770 "psk": "key0", 00:23:07.770 "allow_unrecognized_csi": false, 00:23:07.770 "method": "bdev_nvme_attach_controller", 00:23:07.770 "req_id": 1 00:23:07.770 } 00:23:07.770 Got JSON-RPC error response 00:23:07.770 response: 00:23:07.770 { 00:23:07.770 "code": -5, 00:23:07.770 "message": "Input/output error" 00:23:07.770 } 00:23:07.770 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3695105 00:23:07.770 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3695105 ']' 00:23:07.770 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3695105 00:23:07.770 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.770 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.770 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3695105 00:23:08.027 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.027 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.027 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3695105' 00:23:08.027 killing process with pid 3695105 00:23:08.027 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3695105 00:23:08.027 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.027 00:23:08.027 Latency(us) 00:23:08.027 [2024-12-10T11:26:14.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.027 [2024-12-10T11:26:14.853Z] =================================================================================================================== 00:23:08.027 [2024-12-10T11:26:14.853Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.027 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3695105 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.K7A6pHWw9I 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.K7A6pHWw9I 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.K7A6pHWw9I 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.K7A6pHWw9I 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3695553 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3695553 /var/tmp/bdevperf.sock 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3695553 ']' 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.959 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.959 [2024-12-10 12:26:15.552812] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
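The third case inverts the previous one: host1 keeps its valid key, but the attach targets nqn.2016-06.io.spdk:cnode2, which was never created on this target, so the PSK identity again has no match and the same -5 comes back. The failing call, with every value as in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
      -q nqn.2016-06.io.spdk:host1 --psk key0   # expected to fail: Input/output error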
00:23:08.959 [2024-12-10 12:26:15.552912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695553 ] 00:23:08.959 [2024-12-10 12:26:15.658317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.959 [2024-12-10 12:26:15.765519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.891 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.891 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.891 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.K7A6pHWw9I 00:23:09.891 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.149 [2024-12-10 12:26:16.727767] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.149 [2024-12-10 12:26:16.735081] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:10.149 [2024-12-10 12:26:16.735108] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:10.149 [2024-12-10 12:26:16.735145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.149 [2024-12-10 12:26:16.735524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:23:10.149 [2024-12-10 12:26:16.736504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:23:10.149 [2024-12-10 12:26:16.737507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:10.149 [2024-12-10 12:26:16.737531] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:10.149 [2024-12-10 12:26:16.737546] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:10.149 [2024-12-10 12:26:16.737561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:10.149 request: 00:23:10.149 { 00:23:10.149 "name": "TLSTEST", 00:23:10.149 "trtype": "tcp", 00:23:10.149 "traddr": "10.0.0.2", 00:23:10.149 "adrfam": "ipv4", 00:23:10.149 "trsvcid": "4420", 00:23:10.149 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.149 "prchk_reftag": false, 00:23:10.149 "prchk_guard": false, 00:23:10.149 "hdgst": false, 00:23:10.149 "ddgst": false, 00:23:10.149 "psk": "key0", 00:23:10.149 "allow_unrecognized_csi": false, 00:23:10.149 "method": "bdev_nvme_attach_controller", 00:23:10.149 "req_id": 1 00:23:10.149 } 00:23:10.149 Got JSON-RPC error response 00:23:10.149 response: 00:23:10.149 { 00:23:10.149 "code": -5, 00:23:10.149 "message": "Input/output error" 00:23:10.149 } 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3695553 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3695553 ']' 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3695553 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3695553 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3695553' 00:23:10.149 killing process with pid 3695553 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3695553 00:23:10.149 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.149 00:23:10.149 Latency(us) 00:23:10.149 [2024-12-10T11:26:16.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.149 [2024-12-10T11:26:16.975Z] =================================================================================================================== 00:23:10.149 [2024-12-10T11:26:16.975Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3695553 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.084 
12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3695796 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3695796 /var/tmp/bdevperf.sock 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3695796 ']' 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.084 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.084 [2024-12-10 12:26:17.757805] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
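This block (target/tls.sh@156) is the next negative case: run_bdevperf is invoked with an empty PSK path. keyring_file_check_path only accepts absolute paths, so key registration fails immediately with -1 ("Operation not permitted") and the attach that depends on key0 then fails with -126 ("Required key not available"), as the log below shows. A minimal sketch (the relative-path line is an illustrative variant, not from the test):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''          # -1: non-absolute path
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ./psk.txt   # same rejection
# key0 was never created, so the controller attach cannot load it:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0                          # -126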
00:23:11.084 [2024-12-10 12:26:17.757911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695796 ] 00:23:11.084 [2024-12-10 12:26:17.865631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.342 [2024-12-10 12:26:17.977444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:12.166 [2024-12-10 12:26:18.735524] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:12.166 [2024-12-10 12:26:18.735567] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:12.166 request: 00:23:12.166 { 00:23:12.166 "name": "key0", 00:23:12.166 "path": "", 00:23:12.166 "method": "keyring_file_add_key", 00:23:12.166 "req_id": 1 00:23:12.166 } 00:23:12.166 Got JSON-RPC error response 00:23:12.166 response: 00:23:12.166 { 00:23:12.166 "code": -1, 00:23:12.166 "message": "Operation not permitted" 00:23:12.166 } 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.166 [2024-12-10 12:26:18.912139] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.166 [2024-12-10 12:26:18.912208] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:12.166 request: 00:23:12.166 { 00:23:12.166 "name": "TLSTEST", 00:23:12.166 "trtype": "tcp", 00:23:12.166 "traddr": "10.0.0.2", 00:23:12.166 "adrfam": "ipv4", 00:23:12.166 "trsvcid": "4420", 00:23:12.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.166 "prchk_reftag": false, 00:23:12.166 "prchk_guard": false, 00:23:12.166 "hdgst": false, 00:23:12.166 "ddgst": false, 00:23:12.166 "psk": "key0", 00:23:12.166 "allow_unrecognized_csi": false, 00:23:12.166 "method": "bdev_nvme_attach_controller", 00:23:12.166 "req_id": 1 00:23:12.166 } 00:23:12.166 Got JSON-RPC error response 00:23:12.166 response: 00:23:12.166 { 00:23:12.166 "code": -126, 00:23:12.166 "message": "Required key not available" 00:23:12.166 } 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3695796 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3695796 ']' 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3695796 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3695796 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3695796' 00:23:12.166 killing process with pid 3695796 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3695796 00:23:12.166 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.166 00:23:12.166 Latency(us) 00:23:12.166 [2024-12-10T11:26:18.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.166 [2024-12-10T11:26:18.992Z] =================================================================================================================== 00:23:12.166 [2024-12-10T11:26:18.992Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3695796 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3690319 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3690319 ']' 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3690319 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3690319 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3690319' 00:23:13.100 killing process with pid 3690319 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3690319 00:23:13.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3690319 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:14.476 12:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.j8KxG0QlrY 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.j8KxG0QlrY 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3696484 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3696484 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3696484 ']' 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.476 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.734 [2024-12-10 12:26:21.318474] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:14.734 [2024-12-10 12:26:21.318563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.734 [2024-12-10 12:26:21.431509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.734 [2024-12-10 12:26:21.532955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.735 [2024-12-10 12:26:21.533003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
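target/tls.sh@160 derives key_long through nvmf/common.sh's format_key helper, whose inline `python -` invocation is visible above. A sketch of what that snippet plausibly computes, assuming (per the NVMe/TCP PSK interchange format) that a little-endian CRC-32 of the configured PSK bytes is appended before base64 encoding and that digest argument 2 selects the ':02:' (SHA-384) designator; treat the internals as a reconstruction, not SPDK's verbatim code:

python3 - <<'PYEOF'
import base64, zlib
# ASCII bytes of the configured key, as passed to format_interchange_psk.
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(2, base64.b64encode(key + crc).decode()))
# Should reproduce the key_long value logged above.
PYEOF

The result is written to a mktemp file (key_long_path) and chmod'ed 0600 at target/tls.sh@161-163, since the keyring refuses key files with broader permissions, a restriction this run exercises deliberately later on.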
00:23:14.735 [2024-12-10 12:26:21.533013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.735 [2024-12-10 12:26:21.533023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.735 [2024-12-10 12:26:21.533031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.735 [2024-12-10 12:26:21.534518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.j8KxG0QlrY 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.j8KxG0QlrY 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:15.668 [2024-12-10 12:26:22.345318] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.668 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:15.926 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:15.926 [2024-12-10 12:26:22.706293] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.927 [2024-12-10 12:26:22.706570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.927 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.185 malloc0 00:23:16.185 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:16.443 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j8KxG0QlrY 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.j8KxG0QlrY 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3696754 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3696754 /var/tmp/bdevperf.sock 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3696754 ']' 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.701 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.959 [2024-12-10 12:26:23.528377] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
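For reference, the target-side bring-up that setup_nvmf_tgt performed above reduces to the following sequence (all RPCs verbatim from the log; note -k on nvmf_subsystem_add_listener, which turns on the experimental TLS support for the listener):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the same key registered on the bdevperf side, the TLSTESTn1 verify run below completes cleanly at roughly 4.3k IOPS over the encrypted connection.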
00:23:16.959 [2024-12-10 12:26:23.528465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3696754 ] 00:23:16.959 [2024-12-10 12:26:23.635572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.959 [2024-12-10 12:26:23.747214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.525 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.525 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.525 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:17.783 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.041 [2024-12-10 12:26:24.670503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.041 TLSTESTn1 00:23:18.041 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:18.041 Running I/O for 10 seconds... 00:23:20.347 4057.00 IOPS, 15.85 MiB/s [2024-12-10T11:26:28.107Z] 4218.00 IOPS, 16.48 MiB/s [2024-12-10T11:26:29.041Z] 4264.33 IOPS, 16.66 MiB/s [2024-12-10T11:26:29.974Z] 4292.50 IOPS, 16.77 MiB/s [2024-12-10T11:26:30.908Z] 4295.60 IOPS, 16.78 MiB/s [2024-12-10T11:26:32.282Z] 4292.00 IOPS, 16.77 MiB/s [2024-12-10T11:26:33.215Z] 4308.00 IOPS, 16.83 MiB/s [2024-12-10T11:26:34.149Z] 4324.38 IOPS, 16.89 MiB/s [2024-12-10T11:26:35.083Z] 4315.78 IOPS, 16.86 MiB/s [2024-12-10T11:26:35.083Z] 4330.10 IOPS, 16.91 MiB/s 00:23:28.257 Latency(us) 00:23:28.257 [2024-12-10T11:26:35.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.257 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.257 Verification LBA range: start 0x0 length 0x2000 00:23:28.257 TLSTESTn1 : 10.02 4334.09 16.93 0.00 0.00 29487.37 5648.58 48683.89 00:23:28.257 [2024-12-10T11:26:35.084Z] =================================================================================================================== 00:23:28.258 [2024-12-10T11:26:35.084Z] Total : 4334.09 16.93 0.00 0.00 29487.37 5648.58 48683.89 00:23:28.258 { 00:23:28.258 "results": [ 00:23:28.258 { 00:23:28.258 "job": "TLSTESTn1", 00:23:28.258 "core_mask": "0x4", 00:23:28.258 "workload": "verify", 00:23:28.258 "status": "finished", 00:23:28.258 "verify_range": { 00:23:28.258 "start": 0, 00:23:28.258 "length": 8192 00:23:28.258 }, 00:23:28.258 "queue_depth": 128, 00:23:28.258 "io_size": 4096, 00:23:28.258 "runtime": 10.020102, 00:23:28.258 "iops": 4334.087617072161, 00:23:28.258 "mibps": 16.93002975418813, 00:23:28.258 "io_failed": 0, 00:23:28.258 "io_timeout": 0, 00:23:28.258 "avg_latency_us": 29487.37486054641, 00:23:28.258 "min_latency_us": 5648.579047619048, 00:23:28.258 "max_latency_us": 48683.885714285716 00:23:28.258 } 00:23:28.258 ], 00:23:28.258 
"core_count": 1 00:23:28.258 } 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3696754 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3696754 ']' 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3696754 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3696754 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3696754' 00:23:28.258 killing process with pid 3696754 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3696754 00:23:28.258 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.258 00:23:28.258 Latency(us) 00:23:28.258 [2024-12-10T11:26:35.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.258 [2024-12-10T11:26:35.084Z] =================================================================================================================== 00:23:28.258 [2024-12-10T11:26:35.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.258 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3696754 00:23:29.191 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.j8KxG0QlrY 00:23:29.191 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j8KxG0QlrY 00:23:29.191 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:29.191 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j8KxG0QlrY 00:23:29.191 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:29.191 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.191 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j8KxG0QlrY 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.j8KxG0QlrY 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3698749 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3698749 /var/tmp/bdevperf.sock 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3698749 ']' 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.192 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.192 [2024-12-10 12:26:35.967684] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:23:29.192 [2024-12-10 12:26:35.967776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3698749 ] 00:23:29.450 [2024-12-10 12:26:36.075153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.450 [2024-12-10 12:26:36.185396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.015 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.015 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.015 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:30.272 [2024-12-10 12:26:36.951065] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.j8KxG0QlrY': 0100666 00:23:30.272 [2024-12-10 12:26:36.951098] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:30.272 request: 00:23:30.272 { 00:23:30.272 "name": "key0", 00:23:30.272 "path": "/tmp/tmp.j8KxG0QlrY", 00:23:30.272 "method": "keyring_file_add_key", 00:23:30.272 "req_id": 1 00:23:30.272 } 00:23:30.272 Got JSON-RPC error response 00:23:30.272 response: 00:23:30.272 { 00:23:30.272 "code": -1, 00:23:30.272 "message": "Operation not permitted" 00:23:30.272 } 00:23:30.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.530 [2024-12-10 12:26:37.147687] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.530 [2024-12-10 12:26:37.147734] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:30.530 request: 00:23:30.530 { 00:23:30.530 "name": "TLSTEST", 00:23:30.530 "trtype": "tcp", 00:23:30.530 "traddr": "10.0.0.2", 00:23:30.530 "adrfam": "ipv4", 00:23:30.530 "trsvcid": "4420", 00:23:30.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.530 "prchk_reftag": false, 00:23:30.530 "prchk_guard": false, 00:23:30.530 "hdgst": false, 00:23:30.530 "ddgst": false, 00:23:30.530 "psk": "key0", 00:23:30.530 "allow_unrecognized_csi": false, 00:23:30.530 "method": "bdev_nvme_attach_controller", 00:23:30.530 "req_id": 1 00:23:30.530 } 00:23:30.530 Got JSON-RPC error response 00:23:30.530 response: 00:23:30.530 { 00:23:30.530 "code": -126, 00:23:30.530 "message": "Required key not available" 00:23:30.530 } 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3698749 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3698749 ']' 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3698749 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3698749 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3698749' 00:23:30.530 killing process with pid 3698749 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3698749 00:23:30.530 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.530 00:23:30.530 Latency(us) 00:23:30.530 [2024-12-10T11:26:37.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.530 [2024-12-10T11:26:37.356Z] =================================================================================================================== 00:23:30.530 [2024-12-10T11:26:37.356Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.530 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3698749 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3696484 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3696484 ']' 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3696484 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3696484 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3696484' 00:23:31.462 killing process with pid 3696484 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3696484 00:23:31.462 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3696484 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3699419 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3699419 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3699419 ']' 00:23:32.834 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.835 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.835 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.835 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.835 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.835 [2024-12-10 12:26:39.445132] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:32.835 [2024-12-10 12:26:39.445256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.835 [2024-12-10 12:26:39.558943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.092 [2024-12-10 12:26:39.663566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.092 [2024-12-10 12:26:39.663604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.092 [2024-12-10 12:26:39.663614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.092 [2024-12-10 12:26:39.663625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.092 [2024-12-10 12:26:39.663633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
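The same permission check is now exercised on the target side (target/tls.sh@178, NOT setup_nvmf_tgt): transport, subsystem, listener, and namespace creation all succeed, but keyring_file_add_key rejects the still-0666 key file, so the subsequent nvmf_subsystem_add_host --psk key0 fails with -32603 ("Internal error") because key0 never came into existence, as shown below at tcp.c:3777. A condensed sketch of the failing tail of that sequence:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY    # -1: file is still 0666
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0              # -32603: key0 does not exist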
00:23:33.092 [2024-12-10 12:26:39.664914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.656 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.656 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.656 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.656 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.656 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.656 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.656 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.j8KxG0QlrY 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.j8KxG0QlrY 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.j8KxG0QlrY 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.j8KxG0QlrY 00:23:33.657 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:33.657 [2024-12-10 12:26:40.478059] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.914 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:33.914 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.170 [2024-12-10 12:26:40.871083] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.170 [2024-12-10 12:26:40.871336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.170 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:34.427 malloc0 00:23:34.428 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:34.685 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:34.685 [2024-12-10 
12:26:41.455344] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.j8KxG0QlrY': 0100666 00:23:34.685 [2024-12-10 12:26:41.455377] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:34.685 request: 00:23:34.685 { 00:23:34.685 "name": "key0", 00:23:34.685 "path": "/tmp/tmp.j8KxG0QlrY", 00:23:34.685 "method": "keyring_file_add_key", 00:23:34.685 "req_id": 1 00:23:34.685 } 00:23:34.685 Got JSON-RPC error response 00:23:34.685 response: 00:23:34.685 { 00:23:34.685 "code": -1, 00:23:34.685 "message": "Operation not permitted" 00:23:34.685 } 00:23:34.685 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.943 [2024-12-10 12:26:41.631835] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:34.943 [2024-12-10 12:26:41.631876] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:34.943 request: 00:23:34.943 { 00:23:34.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.943 "host": "nqn.2016-06.io.spdk:host1", 00:23:34.943 "psk": "key0", 00:23:34.943 "method": "nvmf_subsystem_add_host", 00:23:34.943 "req_id": 1 00:23:34.943 } 00:23:34.943 Got JSON-RPC error response 00:23:34.943 response: 00:23:34.943 { 00:23:34.943 "code": -32603, 00:23:34.943 "message": "Internal error" 00:23:34.943 } 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3699419 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3699419 ']' 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3699419 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699419 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699419' 00:23:34.943 killing process with pid 3699419 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3699419 00:23:34.943 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3699419 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.j8KxG0QlrY 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:36.316 12:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3699914 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3699914 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3699914 ']' 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.316 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.316 [2024-12-10 12:26:42.942642] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:36.316 [2024-12-10 12:26:42.942737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.316 [2024-12-10 12:26:43.058138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.573 [2024-12-10 12:26:43.158317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.573 [2024-12-10 12:26:43.158358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.573 [2024-12-10 12:26:43.158368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.573 [2024-12-10 12:26:43.158377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.573 [2024-12-10 12:26:43.158385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
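With permissions restored to 0600, this is the positive control: setup_nvmf_tgt and a second bdevperf TLS run are expected to succeed, after which target/tls.sh@198-199 snapshots both processes' configuration via save_config (the JSON dumps that follow). A sketch of those two calls, with an illustrative jq filter (jq availability is assumed here; it is not part of the logged commands):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
tgtconf=$($rpc save_config)                                 # target, /var/tmp/spdk.sock
bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)  # bdevperf side
# e.g. confirm the PSK landed in the keyring subsystem of the target config:
echo "$tgtconf" | jq '.subsystems[] | select(.subsystem == "keyring")'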
00:23:36.573 [2024-12-10 12:26:43.159728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.j8KxG0QlrY 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.j8KxG0QlrY 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.138 [2024-12-10 12:26:43.945880] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.138 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.396 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.653 [2024-12-10 12:26:44.302817] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.653 [2024-12-10 12:26:44.303064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.653 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.910 malloc0 00:23:37.910 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.910 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:38.168 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3700377 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3700377 /var/tmp/bdevperf.sock 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3700377 ']' 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.426 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.426 [2024-12-10 12:26:45.125049] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:38.426 [2024-12-10 12:26:45.125136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700377 ] 00:23:38.426 [2024-12-10 12:26:45.230066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.683 [2024-12-10 12:26:45.336522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.247 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.247 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.247 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:39.504 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.504 [2024-12-10 12:26:46.275471] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.761 TLSTESTn1 00:23:39.761 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:40.019 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:40.019 "subsystems": [ 00:23:40.019 { 00:23:40.019 "subsystem": "keyring", 00:23:40.019 "config": [ 00:23:40.019 { 00:23:40.019 "method": "keyring_file_add_key", 00:23:40.019 "params": { 00:23:40.019 "name": "key0", 00:23:40.019 "path": "/tmp/tmp.j8KxG0QlrY" 00:23:40.019 } 00:23:40.019 } 00:23:40.019 ] 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "subsystem": "iobuf", 00:23:40.019 "config": [ 00:23:40.019 { 00:23:40.019 "method": "iobuf_set_options", 00:23:40.019 "params": { 00:23:40.019 "small_pool_count": 8192, 00:23:40.019 "large_pool_count": 1024, 00:23:40.019 "small_bufsize": 8192, 00:23:40.019 "large_bufsize": 135168, 00:23:40.019 "enable_numa": false 00:23:40.019 } 00:23:40.019 } 00:23:40.019 ] 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "subsystem": "sock", 00:23:40.019 "config": [ 00:23:40.019 { 00:23:40.019 "method": "sock_set_default_impl", 00:23:40.019 "params": { 00:23:40.019 "impl_name": "posix" 
00:23:40.019 } 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "method": "sock_impl_set_options", 00:23:40.019 "params": { 00:23:40.019 "impl_name": "ssl", 00:23:40.019 "recv_buf_size": 4096, 00:23:40.019 "send_buf_size": 4096, 00:23:40.019 "enable_recv_pipe": true, 00:23:40.019 "enable_quickack": false, 00:23:40.019 "enable_placement_id": 0, 00:23:40.019 "enable_zerocopy_send_server": true, 00:23:40.019 "enable_zerocopy_send_client": false, 00:23:40.019 "zerocopy_threshold": 0, 00:23:40.019 "tls_version": 0, 00:23:40.019 "enable_ktls": false 00:23:40.019 } 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "method": "sock_impl_set_options", 00:23:40.019 "params": { 00:23:40.019 "impl_name": "posix", 00:23:40.019 "recv_buf_size": 2097152, 00:23:40.019 "send_buf_size": 2097152, 00:23:40.019 "enable_recv_pipe": true, 00:23:40.019 "enable_quickack": false, 00:23:40.019 "enable_placement_id": 0, 00:23:40.019 "enable_zerocopy_send_server": true, 00:23:40.019 "enable_zerocopy_send_client": false, 00:23:40.019 "zerocopy_threshold": 0, 00:23:40.019 "tls_version": 0, 00:23:40.019 "enable_ktls": false 00:23:40.019 } 00:23:40.019 } 00:23:40.019 ] 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "subsystem": "vmd", 00:23:40.019 "config": [] 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "subsystem": "accel", 00:23:40.019 "config": [ 00:23:40.019 { 00:23:40.019 "method": "accel_set_options", 00:23:40.019 "params": { 00:23:40.019 "small_cache_size": 128, 00:23:40.019 "large_cache_size": 16, 00:23:40.019 "task_count": 2048, 00:23:40.019 "sequence_count": 2048, 00:23:40.019 "buf_count": 2048 00:23:40.019 } 00:23:40.019 } 00:23:40.019 ] 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "subsystem": "bdev", 00:23:40.019 "config": [ 00:23:40.019 { 00:23:40.019 "method": "bdev_set_options", 00:23:40.019 "params": { 00:23:40.019 "bdev_io_pool_size": 65535, 00:23:40.019 "bdev_io_cache_size": 256, 00:23:40.019 "bdev_auto_examine": true, 00:23:40.019 "iobuf_small_cache_size": 128, 00:23:40.019 "iobuf_large_cache_size": 16 00:23:40.019 } 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "method": "bdev_raid_set_options", 00:23:40.019 "params": { 00:23:40.019 "process_window_size_kb": 1024, 00:23:40.019 "process_max_bandwidth_mb_sec": 0 00:23:40.019 } 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "method": "bdev_iscsi_set_options", 00:23:40.019 "params": { 00:23:40.019 "timeout_sec": 30 00:23:40.019 } 00:23:40.019 }, 00:23:40.019 { 00:23:40.019 "method": "bdev_nvme_set_options", 00:23:40.019 "params": { 00:23:40.019 "action_on_timeout": "none", 00:23:40.019 "timeout_us": 0, 00:23:40.019 "timeout_admin_us": 0, 00:23:40.019 "keep_alive_timeout_ms": 10000, 00:23:40.019 "arbitration_burst": 0, 00:23:40.020 "low_priority_weight": 0, 00:23:40.020 "medium_priority_weight": 0, 00:23:40.020 "high_priority_weight": 0, 00:23:40.020 "nvme_adminq_poll_period_us": 10000, 00:23:40.020 "nvme_ioq_poll_period_us": 0, 00:23:40.020 "io_queue_requests": 0, 00:23:40.020 "delay_cmd_submit": true, 00:23:40.020 "transport_retry_count": 4, 00:23:40.020 "bdev_retry_count": 3, 00:23:40.020 "transport_ack_timeout": 0, 00:23:40.020 "ctrlr_loss_timeout_sec": 0, 00:23:40.020 "reconnect_delay_sec": 0, 00:23:40.020 "fast_io_fail_timeout_sec": 0, 00:23:40.020 "disable_auto_failback": false, 00:23:40.020 "generate_uuids": false, 00:23:40.020 "transport_tos": 0, 00:23:40.020 "nvme_error_stat": false, 00:23:40.020 "rdma_srq_size": 0, 00:23:40.020 "io_path_stat": false, 00:23:40.020 "allow_accel_sequence": false, 00:23:40.020 "rdma_max_cq_size": 0, 00:23:40.020 
"rdma_cm_event_timeout_ms": 0, 00:23:40.020 "dhchap_digests": [ 00:23:40.020 "sha256", 00:23:40.020 "sha384", 00:23:40.020 "sha512" 00:23:40.020 ], 00:23:40.020 "dhchap_dhgroups": [ 00:23:40.020 "null", 00:23:40.020 "ffdhe2048", 00:23:40.020 "ffdhe3072", 00:23:40.020 "ffdhe4096", 00:23:40.020 "ffdhe6144", 00:23:40.020 "ffdhe8192" 00:23:40.020 ] 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "bdev_nvme_set_hotplug", 00:23:40.020 "params": { 00:23:40.020 "period_us": 100000, 00:23:40.020 "enable": false 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "bdev_malloc_create", 00:23:40.020 "params": { 00:23:40.020 "name": "malloc0", 00:23:40.020 "num_blocks": 8192, 00:23:40.020 "block_size": 4096, 00:23:40.020 "physical_block_size": 4096, 00:23:40.020 "uuid": "1ce8ae20-2828-466d-84c4-8aa3d51cbaa6", 00:23:40.020 "optimal_io_boundary": 0, 00:23:40.020 "md_size": 0, 00:23:40.020 "dif_type": 0, 00:23:40.020 "dif_is_head_of_md": false, 00:23:40.020 "dif_pi_format": 0 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "bdev_wait_for_examine" 00:23:40.020 } 00:23:40.020 ] 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "subsystem": "nbd", 00:23:40.020 "config": [] 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "subsystem": "scheduler", 00:23:40.020 "config": [ 00:23:40.020 { 00:23:40.020 "method": "framework_set_scheduler", 00:23:40.020 "params": { 00:23:40.020 "name": "static" 00:23:40.020 } 00:23:40.020 } 00:23:40.020 ] 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "subsystem": "nvmf", 00:23:40.020 "config": [ 00:23:40.020 { 00:23:40.020 "method": "nvmf_set_config", 00:23:40.020 "params": { 00:23:40.020 "discovery_filter": "match_any", 00:23:40.020 "admin_cmd_passthru": { 00:23:40.020 "identify_ctrlr": false 00:23:40.020 }, 00:23:40.020 "dhchap_digests": [ 00:23:40.020 "sha256", 00:23:40.020 "sha384", 00:23:40.020 "sha512" 00:23:40.020 ], 00:23:40.020 "dhchap_dhgroups": [ 00:23:40.020 "null", 00:23:40.020 "ffdhe2048", 00:23:40.020 "ffdhe3072", 00:23:40.020 "ffdhe4096", 00:23:40.020 "ffdhe6144", 00:23:40.020 "ffdhe8192" 00:23:40.020 ] 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "nvmf_set_max_subsystems", 00:23:40.020 "params": { 00:23:40.020 "max_subsystems": 1024 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "nvmf_set_crdt", 00:23:40.020 "params": { 00:23:40.020 "crdt1": 0, 00:23:40.020 "crdt2": 0, 00:23:40.020 "crdt3": 0 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "nvmf_create_transport", 00:23:40.020 "params": { 00:23:40.020 "trtype": "TCP", 00:23:40.020 "max_queue_depth": 128, 00:23:40.020 "max_io_qpairs_per_ctrlr": 127, 00:23:40.020 "in_capsule_data_size": 4096, 00:23:40.020 "max_io_size": 131072, 00:23:40.020 "io_unit_size": 131072, 00:23:40.020 "max_aq_depth": 128, 00:23:40.020 "num_shared_buffers": 511, 00:23:40.020 "buf_cache_size": 4294967295, 00:23:40.020 "dif_insert_or_strip": false, 00:23:40.020 "zcopy": false, 00:23:40.020 "c2h_success": false, 00:23:40.020 "sock_priority": 0, 00:23:40.020 "abort_timeout_sec": 1, 00:23:40.020 "ack_timeout": 0, 00:23:40.020 "data_wr_pool_size": 0 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "nvmf_create_subsystem", 00:23:40.020 "params": { 00:23:40.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.020 "allow_any_host": false, 00:23:40.020 "serial_number": "SPDK00000000000001", 00:23:40.020 "model_number": "SPDK bdev Controller", 00:23:40.020 "max_namespaces": 10, 00:23:40.020 "min_cntlid": 1, 00:23:40.020 
"max_cntlid": 65519, 00:23:40.020 "ana_reporting": false 00:23:40.020 } 00:23:40.020 }, 00:23:40.020 { 00:23:40.020 "method": "nvmf_subsystem_add_host", 00:23:40.021 "params": { 00:23:40.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.021 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.021 "psk": "key0" 00:23:40.021 } 00:23:40.021 }, 00:23:40.021 { 00:23:40.021 "method": "nvmf_subsystem_add_ns", 00:23:40.021 "params": { 00:23:40.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.021 "namespace": { 00:23:40.021 "nsid": 1, 00:23:40.021 "bdev_name": "malloc0", 00:23:40.021 "nguid": "1CE8AE202828466D84C48AA3D51CBAA6", 00:23:40.021 "uuid": "1ce8ae20-2828-466d-84c4-8aa3d51cbaa6", 00:23:40.021 "no_auto_visible": false 00:23:40.021 } 00:23:40.021 } 00:23:40.021 }, 00:23:40.021 { 00:23:40.021 "method": "nvmf_subsystem_add_listener", 00:23:40.021 "params": { 00:23:40.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.021 "listen_address": { 00:23:40.021 "trtype": "TCP", 00:23:40.021 "adrfam": "IPv4", 00:23:40.021 "traddr": "10.0.0.2", 00:23:40.021 "trsvcid": "4420" 00:23:40.021 }, 00:23:40.021 "secure_channel": true 00:23:40.021 } 00:23:40.021 } 00:23:40.021 ] 00:23:40.021 } 00:23:40.021 ] 00:23:40.021 }' 00:23:40.021 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:40.281 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:40.281 "subsystems": [ 00:23:40.281 { 00:23:40.281 "subsystem": "keyring", 00:23:40.281 "config": [ 00:23:40.281 { 00:23:40.281 "method": "keyring_file_add_key", 00:23:40.281 "params": { 00:23:40.281 "name": "key0", 00:23:40.281 "path": "/tmp/tmp.j8KxG0QlrY" 00:23:40.281 } 00:23:40.281 } 00:23:40.281 ] 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "subsystem": "iobuf", 00:23:40.281 "config": [ 00:23:40.281 { 00:23:40.281 "method": "iobuf_set_options", 00:23:40.281 "params": { 00:23:40.281 "small_pool_count": 8192, 00:23:40.281 "large_pool_count": 1024, 00:23:40.281 "small_bufsize": 8192, 00:23:40.281 "large_bufsize": 135168, 00:23:40.281 "enable_numa": false 00:23:40.281 } 00:23:40.281 } 00:23:40.281 ] 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "subsystem": "sock", 00:23:40.281 "config": [ 00:23:40.281 { 00:23:40.281 "method": "sock_set_default_impl", 00:23:40.281 "params": { 00:23:40.281 "impl_name": "posix" 00:23:40.281 } 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "method": "sock_impl_set_options", 00:23:40.281 "params": { 00:23:40.281 "impl_name": "ssl", 00:23:40.281 "recv_buf_size": 4096, 00:23:40.281 "send_buf_size": 4096, 00:23:40.281 "enable_recv_pipe": true, 00:23:40.281 "enable_quickack": false, 00:23:40.281 "enable_placement_id": 0, 00:23:40.281 "enable_zerocopy_send_server": true, 00:23:40.281 "enable_zerocopy_send_client": false, 00:23:40.281 "zerocopy_threshold": 0, 00:23:40.281 "tls_version": 0, 00:23:40.281 "enable_ktls": false 00:23:40.281 } 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "method": "sock_impl_set_options", 00:23:40.281 "params": { 00:23:40.281 "impl_name": "posix", 00:23:40.281 "recv_buf_size": 2097152, 00:23:40.281 "send_buf_size": 2097152, 00:23:40.281 "enable_recv_pipe": true, 00:23:40.281 "enable_quickack": false, 00:23:40.281 "enable_placement_id": 0, 00:23:40.281 "enable_zerocopy_send_server": true, 00:23:40.281 "enable_zerocopy_send_client": false, 00:23:40.281 "zerocopy_threshold": 0, 00:23:40.281 "tls_version": 0, 00:23:40.281 "enable_ktls": false 00:23:40.281 } 00:23:40.281 
} 00:23:40.281 ] 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "subsystem": "vmd", 00:23:40.281 "config": [] 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "subsystem": "accel", 00:23:40.281 "config": [ 00:23:40.281 { 00:23:40.281 "method": "accel_set_options", 00:23:40.281 "params": { 00:23:40.281 "small_cache_size": 128, 00:23:40.281 "large_cache_size": 16, 00:23:40.281 "task_count": 2048, 00:23:40.281 "sequence_count": 2048, 00:23:40.281 "buf_count": 2048 00:23:40.281 } 00:23:40.281 } 00:23:40.281 ] 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "subsystem": "bdev", 00:23:40.281 "config": [ 00:23:40.281 { 00:23:40.281 "method": "bdev_set_options", 00:23:40.281 "params": { 00:23:40.281 "bdev_io_pool_size": 65535, 00:23:40.281 "bdev_io_cache_size": 256, 00:23:40.281 "bdev_auto_examine": true, 00:23:40.281 "iobuf_small_cache_size": 128, 00:23:40.281 "iobuf_large_cache_size": 16 00:23:40.281 } 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "method": "bdev_raid_set_options", 00:23:40.281 "params": { 00:23:40.281 "process_window_size_kb": 1024, 00:23:40.281 "process_max_bandwidth_mb_sec": 0 00:23:40.281 } 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "method": "bdev_iscsi_set_options", 00:23:40.281 "params": { 00:23:40.281 "timeout_sec": 30 00:23:40.281 } 00:23:40.281 }, 00:23:40.281 { 00:23:40.281 "method": "bdev_nvme_set_options", 00:23:40.281 "params": { 00:23:40.281 "action_on_timeout": "none", 00:23:40.281 "timeout_us": 0, 00:23:40.281 "timeout_admin_us": 0, 00:23:40.281 "keep_alive_timeout_ms": 10000, 00:23:40.281 "arbitration_burst": 0, 00:23:40.281 "low_priority_weight": 0, 00:23:40.281 "medium_priority_weight": 0, 00:23:40.281 "high_priority_weight": 0, 00:23:40.281 "nvme_adminq_poll_period_us": 10000, 00:23:40.282 "nvme_ioq_poll_period_us": 0, 00:23:40.282 "io_queue_requests": 512, 00:23:40.282 "delay_cmd_submit": true, 00:23:40.282 "transport_retry_count": 4, 00:23:40.282 "bdev_retry_count": 3, 00:23:40.282 "transport_ack_timeout": 0, 00:23:40.282 "ctrlr_loss_timeout_sec": 0, 00:23:40.282 "reconnect_delay_sec": 0, 00:23:40.282 "fast_io_fail_timeout_sec": 0, 00:23:40.282 "disable_auto_failback": false, 00:23:40.282 "generate_uuids": false, 00:23:40.282 "transport_tos": 0, 00:23:40.282 "nvme_error_stat": false, 00:23:40.282 "rdma_srq_size": 0, 00:23:40.282 "io_path_stat": false, 00:23:40.282 "allow_accel_sequence": false, 00:23:40.282 "rdma_max_cq_size": 0, 00:23:40.282 "rdma_cm_event_timeout_ms": 0, 00:23:40.282 "dhchap_digests": [ 00:23:40.282 "sha256", 00:23:40.282 "sha384", 00:23:40.282 "sha512" 00:23:40.282 ], 00:23:40.282 "dhchap_dhgroups": [ 00:23:40.282 "null", 00:23:40.282 "ffdhe2048", 00:23:40.282 "ffdhe3072", 00:23:40.282 "ffdhe4096", 00:23:40.282 "ffdhe6144", 00:23:40.282 "ffdhe8192" 00:23:40.282 ] 00:23:40.282 } 00:23:40.282 }, 00:23:40.282 { 00:23:40.282 "method": "bdev_nvme_attach_controller", 00:23:40.282 "params": { 00:23:40.282 "name": "TLSTEST", 00:23:40.282 "trtype": "TCP", 00:23:40.282 "adrfam": "IPv4", 00:23:40.282 "traddr": "10.0.0.2", 00:23:40.282 "trsvcid": "4420", 00:23:40.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.282 "prchk_reftag": false, 00:23:40.282 "prchk_guard": false, 00:23:40.282 "ctrlr_loss_timeout_sec": 0, 00:23:40.282 "reconnect_delay_sec": 0, 00:23:40.282 "fast_io_fail_timeout_sec": 0, 00:23:40.282 "psk": "key0", 00:23:40.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.282 "hdgst": false, 00:23:40.282 "ddgst": false, 00:23:40.282 "multipath": "multipath" 00:23:40.282 } 00:23:40.282 }, 00:23:40.282 { 00:23:40.282 "method": 
"bdev_nvme_set_hotplug", 00:23:40.282 "params": { 00:23:40.282 "period_us": 100000, 00:23:40.282 "enable": false 00:23:40.282 } 00:23:40.282 }, 00:23:40.282 { 00:23:40.282 "method": "bdev_wait_for_examine" 00:23:40.282 } 00:23:40.282 ] 00:23:40.282 }, 00:23:40.282 { 00:23:40.282 "subsystem": "nbd", 00:23:40.282 "config": [] 00:23:40.282 } 00:23:40.282 ] 00:23:40.282 }' 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3700377 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3700377 ']' 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3700377 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3700377 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3700377' 00:23:40.282 killing process with pid 3700377 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3700377 00:23:40.282 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.282 00:23:40.282 Latency(us) 00:23:40.282 [2024-12-10T11:26:47.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.282 [2024-12-10T11:26:47.108Z] =================================================================================================================== 00:23:40.282 [2024-12-10T11:26:47.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.282 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3700377 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3699914 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3699914 ']' 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3699914 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699914 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699914' 00:23:41.296 killing process with pid 3699914 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3699914 00:23:41.296 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3699914 00:23:42.240 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:42.240 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.240 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.240 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:42.240 "subsystems": [ 00:23:42.240 { 00:23:42.240 "subsystem": "keyring", 00:23:42.240 "config": [ 00:23:42.240 { 00:23:42.240 "method": "keyring_file_add_key", 00:23:42.240 "params": { 00:23:42.240 "name": "key0", 00:23:42.240 "path": "/tmp/tmp.j8KxG0QlrY" 00:23:42.240 } 00:23:42.240 } 00:23:42.240 ] 00:23:42.240 }, 00:23:42.240 { 00:23:42.240 "subsystem": "iobuf", 00:23:42.240 "config": [ 00:23:42.240 { 00:23:42.240 "method": "iobuf_set_options", 00:23:42.240 "params": { 00:23:42.240 "small_pool_count": 8192, 00:23:42.240 "large_pool_count": 1024, 00:23:42.240 "small_bufsize": 8192, 00:23:42.240 "large_bufsize": 135168, 00:23:42.240 "enable_numa": false 00:23:42.240 } 00:23:42.240 } 00:23:42.240 ] 00:23:42.240 }, 00:23:42.240 { 00:23:42.240 "subsystem": "sock", 00:23:42.240 "config": [ 00:23:42.240 { 00:23:42.240 "method": "sock_set_default_impl", 00:23:42.240 "params": { 00:23:42.240 "impl_name": "posix" 00:23:42.240 } 00:23:42.240 }, 00:23:42.240 { 00:23:42.240 "method": "sock_impl_set_options", 00:23:42.240 "params": { 00:23:42.240 "impl_name": "ssl", 00:23:42.240 "recv_buf_size": 4096, 00:23:42.240 "send_buf_size": 4096, 00:23:42.240 "enable_recv_pipe": true, 00:23:42.240 "enable_quickack": false, 00:23:42.240 "enable_placement_id": 0, 00:23:42.240 "enable_zerocopy_send_server": true, 00:23:42.240 "enable_zerocopy_send_client": false, 00:23:42.240 "zerocopy_threshold": 0, 00:23:42.240 "tls_version": 0, 00:23:42.240 "enable_ktls": false 00:23:42.240 } 00:23:42.240 }, 00:23:42.240 { 00:23:42.240 "method": "sock_impl_set_options", 00:23:42.240 "params": { 00:23:42.240 "impl_name": "posix", 00:23:42.240 "recv_buf_size": 2097152, 00:23:42.240 "send_buf_size": 2097152, 00:23:42.240 "enable_recv_pipe": true, 00:23:42.240 "enable_quickack": false, 00:23:42.240 "enable_placement_id": 0, 00:23:42.241 "enable_zerocopy_send_server": true, 00:23:42.241 "enable_zerocopy_send_client": false, 00:23:42.241 "zerocopy_threshold": 0, 00:23:42.241 "tls_version": 0, 00:23:42.241 "enable_ktls": false 00:23:42.241 } 00:23:42.241 } 00:23:42.241 ] 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "subsystem": "vmd", 00:23:42.241 "config": [] 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "subsystem": "accel", 00:23:42.241 "config": [ 00:23:42.241 { 00:23:42.241 "method": "accel_set_options", 00:23:42.241 "params": { 00:23:42.241 "small_cache_size": 128, 00:23:42.241 "large_cache_size": 16, 00:23:42.241 "task_count": 2048, 00:23:42.241 "sequence_count": 2048, 00:23:42.241 "buf_count": 2048 00:23:42.241 } 00:23:42.241 } 00:23:42.241 ] 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "subsystem": "bdev", 00:23:42.241 "config": [ 00:23:42.241 { 00:23:42.241 "method": "bdev_set_options", 00:23:42.241 "params": { 00:23:42.241 "bdev_io_pool_size": 65535, 00:23:42.241 "bdev_io_cache_size": 256, 00:23:42.241 "bdev_auto_examine": true, 00:23:42.241 "iobuf_small_cache_size": 128, 00:23:42.241 "iobuf_large_cache_size": 16 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "bdev_raid_set_options", 00:23:42.241 "params": { 00:23:42.241 "process_window_size_kb": 1024, 00:23:42.241 "process_max_bandwidth_mb_sec": 0 00:23:42.241 } 00:23:42.241 }, 
00:23:42.241 { 00:23:42.241 "method": "bdev_iscsi_set_options", 00:23:42.241 "params": { 00:23:42.241 "timeout_sec": 30 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "bdev_nvme_set_options", 00:23:42.241 "params": { 00:23:42.241 "action_on_timeout": "none", 00:23:42.241 "timeout_us": 0, 00:23:42.241 "timeout_admin_us": 0, 00:23:42.241 "keep_alive_timeout_ms": 10000, 00:23:42.241 "arbitration_burst": 0, 00:23:42.241 "low_priority_weight": 0, 00:23:42.241 "medium_priority_weight": 0, 00:23:42.241 "high_priority_weight": 0, 00:23:42.241 "nvme_adminq_poll_period_us": 10000, 00:23:42.241 "nvme_ioq_poll_period_us": 0, 00:23:42.241 "io_queue_requests": 0, 00:23:42.241 "delay_cmd_submit": true, 00:23:42.241 "transport_retry_count": 4, 00:23:42.241 "bdev_retry_count": 3, 00:23:42.241 "transport_ack_timeout": 0, 00:23:42.241 "ctrlr_loss_timeout_sec": 0, 00:23:42.241 "reconnect_delay_sec": 0, 00:23:42.241 "fast_io_fail_timeout_sec": 0, 00:23:42.241 "disable_auto_failback": false, 00:23:42.241 "generate_uuids": false, 00:23:42.241 "transport_tos": 0, 00:23:42.241 "nvme_error_stat": false, 00:23:42.241 "rdma_srq_size": 0, 00:23:42.241 "io_path_stat": false, 00:23:42.241 "allow_accel_sequence": false, 00:23:42.241 "rdma_max_cq_size": 0, 00:23:42.241 "rdma_cm_event_timeout_ms": 0, 00:23:42.241 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.241 "dhchap_digests": [ 00:23:42.241 "sha256", 00:23:42.241 "sha384", 00:23:42.241 "sha512" 00:23:42.241 ], 00:23:42.241 "dhchap_dhgroups": [ 00:23:42.241 "null", 00:23:42.241 "ffdhe2048", 00:23:42.241 "ffdhe3072", 00:23:42.241 "ffdhe4096", 00:23:42.241 "ffdhe6144", 00:23:42.241 "ffdhe8192" 00:23:42.241 ] 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "bdev_nvme_set_hotplug", 00:23:42.241 "params": { 00:23:42.241 "period_us": 100000, 00:23:42.241 "enable": false 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "bdev_malloc_create", 00:23:42.241 "params": { 00:23:42.241 "name": "malloc0", 00:23:42.241 "num_blocks": 8192, 00:23:42.241 "block_size": 4096, 00:23:42.241 "physical_block_size": 4096, 00:23:42.241 "uuid": "1ce8ae20-2828-466d-84c4-8aa3d51cbaa6", 00:23:42.241 "optimal_io_boundary": 0, 00:23:42.241 "md_size": 0, 00:23:42.241 "dif_type": 0, 00:23:42.241 "dif_is_head_of_md": false, 00:23:42.241 "dif_pi_format": 0 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "bdev_wait_for_examine" 00:23:42.241 } 00:23:42.241 ] 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "subsystem": "nbd", 00:23:42.241 "config": [] 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "subsystem": "scheduler", 00:23:42.241 "config": [ 00:23:42.241 { 00:23:42.241 "method": "framework_set_scheduler", 00:23:42.241 "params": { 00:23:42.241 "name": "static" 00:23:42.241 } 00:23:42.241 } 00:23:42.241 ] 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "subsystem": "nvmf", 00:23:42.241 "config": [ 00:23:42.241 { 00:23:42.241 "method": "nvmf_set_config", 00:23:42.241 "params": { 00:23:42.241 "discovery_filter": "match_any", 00:23:42.241 "admin_cmd_passthru": { 00:23:42.241 "identify_ctrlr": false 00:23:42.241 }, 00:23:42.241 "dhchap_digests": [ 00:23:42.241 "sha256", 00:23:42.241 "sha384", 00:23:42.241 "sha512" 00:23:42.241 ], 00:23:42.241 "dhchap_dhgroups": [ 00:23:42.241 "null", 00:23:42.241 "ffdhe2048", 00:23:42.241 "ffdhe3072", 00:23:42.241 "ffdhe4096", 00:23:42.241 "ffdhe6144", 00:23:42.241 "ffdhe8192" 00:23:42.241 ] 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 
"method": "nvmf_set_max_subsystems", 00:23:42.241 "params": { 00:23:42.241 "max_subsystems": 1024 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "nvmf_set_crdt", 00:23:42.241 "params": { 00:23:42.241 "crdt1": 0, 00:23:42.241 "crdt2": 0, 00:23:42.241 "crdt3": 0 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "nvmf_create_transport", 00:23:42.241 "params": { 00:23:42.241 "trtype": "TCP", 00:23:42.241 "max_queue_depth": 128, 00:23:42.241 "max_io_qpairs_per_ctrlr": 127, 00:23:42.241 "in_capsule_data_size": 4096, 00:23:42.241 "max_io_size": 131072, 00:23:42.241 "io_unit_size": 131072, 00:23:42.241 "max_aq_depth": 128, 00:23:42.241 "num_shared_buffers": 511, 00:23:42.241 "buf_cache_size": 4294967295, 00:23:42.241 "dif_insert_or_strip": false, 00:23:42.241 "zcopy": false, 00:23:42.241 "c2h_success": false, 00:23:42.241 "sock_priority": 0, 00:23:42.241 "abort_timeout_sec": 1, 00:23:42.241 "ack_timeout": 0, 00:23:42.241 "data_wr_pool_size": 0 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "nvmf_create_subsystem", 00:23:42.241 "params": { 00:23:42.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.241 "allow_any_host": false, 00:23:42.241 "serial_number": "SPDK00000000000001", 00:23:42.241 "model_number": "SPDK bdev Controller", 00:23:42.241 "max_namespaces": 10, 00:23:42.241 "min_cntlid": 1, 00:23:42.241 "max_cntlid": 65519, 00:23:42.241 "ana_reporting": false 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "nvmf_subsystem_add_host", 00:23:42.241 "params": { 00:23:42.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.241 "host": "nqn.2016-06.io.spdk:host1", 00:23:42.241 "psk": "key0" 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "nvmf_subsystem_add_ns", 00:23:42.241 "params": { 00:23:42.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.241 "namespace": { 00:23:42.241 "nsid": 1, 00:23:42.241 "bdev_name": "malloc0", 00:23:42.241 "nguid": "1CE8AE202828466D84C48AA3D51CBAA6", 00:23:42.241 "uuid": "1ce8ae20-2828-466d-84c4-8aa3d51cbaa6", 00:23:42.241 "no_auto_visible": false 00:23:42.241 } 00:23:42.241 } 00:23:42.241 }, 00:23:42.241 { 00:23:42.241 "method": "nvmf_subsystem_add_listener", 00:23:42.241 "params": { 00:23:42.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.241 "listen_address": { 00:23:42.241 "trtype": "TCP", 00:23:42.241 "adrfam": "IPv4", 00:23:42.241 "traddr": "10.0.0.2", 00:23:42.241 "trsvcid": "4420" 00:23:42.241 }, 00:23:42.241 "secure_channel": true 00:23:42.241 } 00:23:42.241 } 00:23:42.241 ] 00:23:42.241 } 00:23:42.241 ] 00:23:42.241 }' 00:23:42.241 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3701007 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3701007 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3701007 ']' 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:42.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.242 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.500 [2024-12-10 12:26:49.112233] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:42.500 [2024-12-10 12:26:49.112327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.500 [2024-12-10 12:26:49.228781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.759 [2024-12-10 12:26:49.331955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.759 [2024-12-10 12:26:49.331997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.759 [2024-12-10 12:26:49.332008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.759 [2024-12-10 12:26:49.332019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.759 [2024-12-10 12:26:49.332027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.759 [2024-12-10 12:26:49.333537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.017 [2024-12-10 12:26:49.832123] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.275 [2024-12-10 12:26:49.864174] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.275 [2024-12-10 12:26:49.864429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3701104 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3701104 /var/tmp/bdevperf.sock 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3701104 ']' 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.275 12:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.275 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:43.275 "subsystems": [ 00:23:43.275 { 00:23:43.275 "subsystem": "keyring", 00:23:43.275 "config": [ 00:23:43.275 { 00:23:43.275 "method": "keyring_file_add_key", 00:23:43.275 "params": { 00:23:43.275 "name": "key0", 00:23:43.275 "path": "/tmp/tmp.j8KxG0QlrY" 00:23:43.275 } 00:23:43.275 } 00:23:43.275 ] 00:23:43.275 }, 00:23:43.275 { 00:23:43.275 "subsystem": "iobuf", 00:23:43.275 "config": [ 00:23:43.275 { 00:23:43.275 "method": "iobuf_set_options", 00:23:43.275 "params": { 00:23:43.275 "small_pool_count": 8192, 00:23:43.275 "large_pool_count": 1024, 00:23:43.275 "small_bufsize": 8192, 00:23:43.275 "large_bufsize": 135168, 00:23:43.275 "enable_numa": false 00:23:43.275 } 00:23:43.275 } 00:23:43.276 ] 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "subsystem": "sock", 00:23:43.276 "config": [ 00:23:43.276 { 00:23:43.276 "method": "sock_set_default_impl", 00:23:43.276 "params": { 00:23:43.276 "impl_name": "posix" 00:23:43.276 } 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "method": "sock_impl_set_options", 00:23:43.276 "params": { 00:23:43.276 "impl_name": "ssl", 00:23:43.276 "recv_buf_size": 4096, 00:23:43.276 "send_buf_size": 4096, 00:23:43.276 "enable_recv_pipe": true, 00:23:43.276 "enable_quickack": false, 00:23:43.276 "enable_placement_id": 0, 00:23:43.276 "enable_zerocopy_send_server": true, 00:23:43.276 "enable_zerocopy_send_client": false, 00:23:43.276 "zerocopy_threshold": 0, 00:23:43.276 "tls_version": 0, 00:23:43.276 "enable_ktls": false 00:23:43.276 } 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "method": "sock_impl_set_options", 00:23:43.276 "params": { 00:23:43.276 "impl_name": "posix", 00:23:43.276 "recv_buf_size": 2097152, 00:23:43.276 "send_buf_size": 2097152, 00:23:43.276 "enable_recv_pipe": true, 00:23:43.276 "enable_quickack": false, 00:23:43.276 "enable_placement_id": 0, 00:23:43.276 "enable_zerocopy_send_server": true, 00:23:43.276 "enable_zerocopy_send_client": false, 00:23:43.276 "zerocopy_threshold": 0, 00:23:43.276 "tls_version": 0, 00:23:43.276 "enable_ktls": false 00:23:43.276 } 00:23:43.276 } 00:23:43.276 ] 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "subsystem": "vmd", 00:23:43.276 "config": [] 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "subsystem": "accel", 00:23:43.276 "config": [ 00:23:43.276 { 00:23:43.276 "method": "accel_set_options", 00:23:43.276 "params": { 00:23:43.276 "small_cache_size": 128, 00:23:43.276 "large_cache_size": 16, 00:23:43.276 "task_count": 2048, 00:23:43.276 "sequence_count": 2048, 00:23:43.276 "buf_count": 2048 00:23:43.276 } 00:23:43.276 } 00:23:43.276 ] 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "subsystem": "bdev", 00:23:43.276 "config": [ 00:23:43.276 { 00:23:43.276 "method": "bdev_set_options", 00:23:43.276 "params": { 00:23:43.276 "bdev_io_pool_size": 65535, 00:23:43.276 "bdev_io_cache_size": 256, 00:23:43.276 "bdev_auto_examine": true, 00:23:43.276 "iobuf_small_cache_size": 128, 00:23:43.276 "iobuf_large_cache_size": 16 00:23:43.276 } 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "method": "bdev_raid_set_options", 00:23:43.276 "params": { 00:23:43.276 "process_window_size_kb": 1024, 00:23:43.276 "process_max_bandwidth_mb_sec": 0 00:23:43.276 } 00:23:43.276 }, 
00:23:43.276 { 00:23:43.276 "method": "bdev_iscsi_set_options", 00:23:43.276 "params": { 00:23:43.276 "timeout_sec": 30 00:23:43.276 } 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "method": "bdev_nvme_set_options", 00:23:43.276 "params": { 00:23:43.276 "action_on_timeout": "none", 00:23:43.276 "timeout_us": 0, 00:23:43.276 "timeout_admin_us": 0, 00:23:43.276 "keep_alive_timeout_ms": 10000, 00:23:43.276 "arbitration_burst": 0, 00:23:43.276 "low_priority_weight": 0, 00:23:43.276 "medium_priority_weight": 0, 00:23:43.276 "high_priority_weight": 0, 00:23:43.276 "nvme_adminq_poll_period_us": 10000, 00:23:43.276 "nvme_ioq_poll_period_us": 0, 00:23:43.276 "io_queue_requests": 512, 00:23:43.276 "delay_cmd_submit": true, 00:23:43.276 "transport_retry_count": 4, 00:23:43.276 "bdev_retry_count": 3, 00:23:43.276 "transport_ack_timeout": 0, 00:23:43.276 "ctrlr_loss_timeout_sec": 0, 00:23:43.276 "reconnect_delay_sec": 0, 00:23:43.276 "fast_io_fail_timeout_sec": 0, 00:23:43.276 "disable_auto_failback": false, 00:23:43.276 "generate_uuids": false, 00:23:43.276 "transport_tos": 0, 00:23:43.276 "nvme_error_stat": false, 00:23:43.276 "rdma_srq_size": 0, 00:23:43.276 "io_path_stat": false, 00:23:43.276 "allow_accel_sequence": false, 00:23:43.276 "rdma_max_cq_size": 0, 00:23:43.276 "rdma_cm_event_timeout_ms": 0, 00:23:43.276 "dhchap_digests": [ 00:23:43.276 "sha256", 00:23:43.276 "sha384", 00:23:43.276 "sha512" 00:23:43.276 ], 00:23:43.276 "dhchap_dhgroups": [ 00:23:43.276 "null", 00:23:43.276 "ffdhe2048", 00:23:43.276 "ffdhe3072", 00:23:43.276 "ffdhe4096", 00:23:43.276 "ffdhe6144", 00:23:43.276 "ffdhe8192" 00:23:43.276 ] 00:23:43.276 } 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "method": "bdev_nvme_attach_controller", 00:23:43.276 "params": { 00:23:43.276 "name": "TLSTEST", 00:23:43.276 "trtype": "TCP", 00:23:43.276 "adrfam": "IPv4", 00:23:43.276 "traddr": "10.0.0.2", 00:23:43.276 "trsvcid": "4420", 00:23:43.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.276 "prchk_reftag": false, 00:23:43.276 "prchk_guard": false, 00:23:43.276 "ctrlr_loss_timeout_sec": 0, 00:23:43.276 "reconnect_delay_sec": 0, 00:23:43.276 "fast_io_fail_timeout_sec": 0, 00:23:43.276 "psk": "key0", 00:23:43.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.276 "hdgst": false, 00:23:43.276 "ddgst": false, 00:23:43.276 "multipath": "multipath" 00:23:43.276 } 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "method": "bdev_nvme_set_hotplug", 00:23:43.276 "params": { 00:23:43.276 "period_us": 100000, 00:23:43.276 "enable": false 00:23:43.276 } 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "method": "bdev_wait_for_examine" 00:23:43.276 } 00:23:43.276 ] 00:23:43.276 }, 00:23:43.276 { 00:23:43.276 "subsystem": "nbd", 00:23:43.276 "config": [] 00:23:43.276 } 00:23:43.276 ] 00:23:43.276 }' 00:23:43.276 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.276 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.276 [2024-12-10 12:26:50.011791] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
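[Annotation] The trace above shows how each bdevperf instance in this suite is launched: `-z` starts it paused until a `perform_tests` RPC arrives, and `-c /dev/fd/63` is the file descriptor bash assigns to the process substitution carrying the echoed JSON config. A minimal sketch of that launch pattern, assuming `$BDEVPERF_CONF` holds the JSON document echoed above and with a placeholder binary path:

    # Hedged sketch, not the verbatim test script. Flags mirror the trace:
    # -m 0x4 pins the reactor mask, -z waits for an RPC trigger, -r sets a
    # private RPC socket, -c reads the config from a process substitution.
    BDEVPERF=/path/to/spdk/build/examples/bdevperf   # placeholder path
    RPC_SOCK=/var/tmp/bdevperf.sock
    "$BDEVPERF" -m 0x4 -z -r "$RPC_SOCK" -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$BDEVPERF_CONF") &
    bdevperf_pid=$!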
00:23:43.276 [2024-12-10 12:26:50.011879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701104 ] 00:23:43.534 [2024-12-10 12:26:50.121948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.534 [2024-12-10 12:26:50.238502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.099 [2024-12-10 12:26:50.662334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.099 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.099 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.099 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:44.099 Running I/O for 10 seconds... 00:23:46.408 4401.00 IOPS, 17.19 MiB/s [2024-12-10T11:26:54.168Z] 4426.00 IOPS, 17.29 MiB/s [2024-12-10T11:26:55.102Z] 4476.67 IOPS, 17.49 MiB/s [2024-12-10T11:26:56.035Z] 4518.75 IOPS, 17.65 MiB/s [2024-12-10T11:26:56.969Z] 4539.20 IOPS, 17.73 MiB/s [2024-12-10T11:26:58.345Z] 4532.67 IOPS, 17.71 MiB/s [2024-12-10T11:26:59.279Z] 4525.43 IOPS, 17.68 MiB/s [2024-12-10T11:27:00.214Z] 4539.62 IOPS, 17.73 MiB/s [2024-12-10T11:27:01.144Z] 4536.44 IOPS, 17.72 MiB/s [2024-12-10T11:27:01.144Z] 4541.60 IOPS, 17.74 MiB/s 00:23:54.318 Latency(us) 00:23:54.318 [2024-12-10T11:27:01.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.318 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.318 Verification LBA range: start 0x0 length 0x2000 00:23:54.318 TLSTESTn1 : 10.02 4546.10 17.76 0.00 0.00 28112.04 6834.47 26089.57 00:23:54.318 [2024-12-10T11:27:01.144Z] =================================================================================================================== 00:23:54.318 [2024-12-10T11:27:01.144Z] Total : 4546.10 17.76 0.00 0.00 28112.04 6834.47 26089.57 00:23:54.318 { 00:23:54.318 "results": [ 00:23:54.318 { 00:23:54.318 "job": "TLSTESTn1", 00:23:54.318 "core_mask": "0x4", 00:23:54.318 "workload": "verify", 00:23:54.318 "status": "finished", 00:23:54.318 "verify_range": { 00:23:54.318 "start": 0, 00:23:54.318 "length": 8192 00:23:54.318 }, 00:23:54.318 "queue_depth": 128, 00:23:54.318 "io_size": 4096, 00:23:54.318 "runtime": 10.017821, 00:23:54.318 "iops": 4546.0983980448445, 00:23:54.318 "mibps": 17.758196867362674, 00:23:54.318 "io_failed": 0, 00:23:54.318 "io_timeout": 0, 00:23:54.318 "avg_latency_us": 28112.037652569787, 00:23:54.318 "min_latency_us": 6834.4685714285715, 00:23:54.318 "max_latency_us": 26089.569523809525 00:23:54.318 } 00:23:54.318 ], 00:23:54.318 "core_count": 1 00:23:54.318 } 00:23:54.318 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.318 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3701104 00:23:54.318 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3701104 ']' 00:23:54.318 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3701104 00:23:54.318 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:54.318 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.318 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701104 00:23:54.318 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:54.318 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:54.318 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701104' 00:23:54.318 killing process with pid 3701104 00:23:54.318 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3701104 00:23:54.318 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.318 00:23:54.318 Latency(us) 00:23:54.318 [2024-12-10T11:27:01.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.318 [2024-12-10T11:27:01.144Z] =================================================================================================================== 00:23:54.318 [2024-12-10T11:27:01.145Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.319 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3701104 00:23:55.251 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3701007 00:23:55.251 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3701007 ']' 00:23:55.251 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3701007 00:23:55.251 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.251 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.251 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701007 00:23:55.251 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:55.251 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:55.251 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701007' 00:23:55.251 killing process with pid 3701007 00:23:55.251 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3701007 00:23:55.251 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3701007 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3703454 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3703454 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
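[Annotation] The `@954`-`@978` lines above are the expansion of the `killprocess` helper tearing down both the bdevperf instance (3701104) and the target (3701007). An approximate reconstruction, inferred only from the traced checks rather than copied from `common/autotest_common.sh`:

    # Approximate shape of killprocess as exercised by the trace: verify the
    # pid is set and alive, refuse to kill a bare sudo wrapper, then kill and
    # reap. The real helper has more branches than shown here.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 0        # don't kill the sudo parent
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }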
00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3703454 ']' 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.623 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.623 [2024-12-10 12:27:03.292218] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:56.623 [2024-12-10 12:27:03.292308] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.623 [2024-12-10 12:27:03.405778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.881 [2024-12-10 12:27:03.507897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.881 [2024-12-10 12:27:03.507943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.881 [2024-12-10 12:27:03.507953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.881 [2024-12-10 12:27:03.507964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.881 [2024-12-10 12:27:03.507972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
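[Annotation] With the target relaunched inside the `cvl_0_0_ns_spdk` namespace, the `@835`-`@842` lines show `waitforlisten` blocking until the RPC socket answers. A minimal stand-in with the same observable behavior, assuming `rpc.py` sits at its usual path in the tree (the real helper lives in `common/autotest_common.sh` and is more elaborate):

    # Poll the UNIX-domain RPC socket until the target responds; give up
    # after max_retries attempts or if the process dies first.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1
            if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }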
00:23:56.881 [2024-12-10 12:27:03.509454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.j8KxG0QlrY 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.j8KxG0QlrY 00:23:57.446 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.703 [2024-12-10 12:27:04.302917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.703 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:57.703 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:57.961 [2024-12-10 12:27:04.667882] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.961 [2024-12-10 12:27:04.668128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.961 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:58.218 malloc0 00:23:58.218 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:58.479 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:58.479 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3703713 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3703713 /var/tmp/bdevperf.sock 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3703713 ']' 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.738 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.738 [2024-12-10 12:27:05.500133] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:58.738 [2024-12-10 12:27:05.500235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703713 ] 00:23:58.995 [2024-12-10 12:27:05.605592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.995 [2024-12-10 12:27:05.717719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.559 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.559 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.560 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:23:59.817 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:00.075 [2024-12-10 12:27:06.661046] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.075 nvme0n1 00:24:00.075 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.075 Running I/O for 1 seconds... 
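[Annotation] The initiator side of this test case is the pair of RPCs traced just above at `target/tls.sh@229` and `@230`: the PSK file is registered in bdevperf's keyring under the name `key0`, and the controller is attached with `--psk key0` so the TCP qpair negotiates TLS. Replayed as a standalone sketch, with `$PSK` standing in for the mktemp file (`/tmp/tmp.j8KxG0QlrY` in this run):

    # Register the PSK with the initiator's keyring, then attach a
    # TLS-protected controller to the target's secure listener.
    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 "$PSK"
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1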
00:24:01.448 4430.00 IOPS, 17.30 MiB/s 00:24:01.448 Latency(us) 00:24:01.448 [2024-12-10T11:27:08.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.448 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:01.448 Verification LBA range: start 0x0 length 0x2000 00:24:01.448 nvme0n1 : 1.02 4480.31 17.50 0.00 0.00 28315.17 7708.28 25964.74 00:24:01.448 [2024-12-10T11:27:08.274Z] =================================================================================================================== 00:24:01.448 [2024-12-10T11:27:08.274Z] Total : 4480.31 17.50 0.00 0.00 28315.17 7708.28 25964.74 00:24:01.448 { 00:24:01.448 "results": [ 00:24:01.448 { 00:24:01.448 "job": "nvme0n1", 00:24:01.448 "core_mask": "0x2", 00:24:01.448 "workload": "verify", 00:24:01.448 "status": "finished", 00:24:01.448 "verify_range": { 00:24:01.448 "start": 0, 00:24:01.448 "length": 8192 00:24:01.448 }, 00:24:01.448 "queue_depth": 128, 00:24:01.448 "io_size": 4096, 00:24:01.448 "runtime": 1.017564, 00:24:01.448 "iops": 4480.307872526937, 00:24:01.448 "mibps": 17.501202627058348, 00:24:01.448 "io_failed": 0, 00:24:01.448 "io_timeout": 0, 00:24:01.448 "avg_latency_us": 28315.167675450968, 00:24:01.448 "min_latency_us": 7708.281904761905, 00:24:01.448 "max_latency_us": 25964.73904761905 00:24:01.448 } 00:24:01.448 ], 00:24:01.448 "core_count": 1 00:24:01.448 } 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3703713 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3703713 ']' 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3703713 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3703713 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3703713' 00:24:01.448 killing process with pid 3703713 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3703713 00:24:01.448 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.448 00:24:01.448 Latency(us) 00:24:01.448 [2024-12-10T11:27:08.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.448 [2024-12-10T11:27:08.274Z] =================================================================================================================== 00:24:01.448 [2024-12-10T11:27:08.274Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.448 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3703713 00:24:02.015 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3703454 00:24:02.015 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3703454 ']' 00:24:02.015 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3703454 00:24:02.015 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:02.015 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.015 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3703454 00:24:02.274 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.274 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.274 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3703454' 00:24:02.274 killing process with pid 3703454 00:24:02.274 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3703454 00:24:02.274 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3703454 00:24:03.208 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:03.208 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.208 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.208 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3704901 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3704901 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3704901 ']' 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.465 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.465 [2024-12-10 12:27:10.118656] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:03.465 [2024-12-10 12:27:10.118749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.465 [2024-12-10 12:27:10.236048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.723 [2024-12-10 12:27:10.344137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.723 [2024-12-10 12:27:10.344185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
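[Annotation] Once this fresh target is listening, the trace below repeats the TLS bring-up, this time issued through `rpc_cmd` against `/var/tmp/spdk.sock`. Consolidated from the `target/tls.sh@52`-`@59` commands traced earlier in this section, with `$PSK` as a placeholder for the interleave key file:

    # Target-side TLS provisioning, step by step: TCP transport, subsystem,
    # secure listener (-k), a 32 MiB malloc namespace, then the PSK and the
    # host NQN that is allowed to use it.
    RPC=./scripts/rpc.py        # defaults to /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$PSK"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0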
00:24:03.723 [2024-12-10 12:27:10.344195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.723 [2024-12-10 12:27:10.344206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.723 [2024-12-10 12:27:10.344214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.723 [2024-12-10 12:27:10.345631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.288 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.288 [2024-12-10 12:27:10.956971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.288 malloc0 00:24:04.288 [2024-12-10 12:27:11.010687] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.288 [2024-12-10 12:27:11.010939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3705030 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3705030 /var/tmp/bdevperf.sock 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3705030 ']' 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.288 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.288 [2024-12-10 12:27:11.094291] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
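[Editor's sketch] The target-side setup traced just above (PSK file registration, the malloc0 bdev, the experimental TLS listener on 10.0.0.2:4420) is driven through rpc_cmd, and the tgtcfg dump further down shows the state it produces. A minimal sketch of the equivalent explicit rpc.py sequence follows; the flag spellings are inferred from that JSON dump rather than confirmed against this exact SPDK revision, so treat them as illustrative:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Register the TLS PSK file under the name the host entry will reference.
    $rpc keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY
    # 8192 blocks x 4096 B = 32 MiB backing bdev, matching bdev_malloc_create in the dump.
    $rpc bdev_malloc_create -b malloc0 32 4096
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    # Tie the PSK to the host NQN, then listen using the ssl socket implementation.
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --sock-impl ssl

The host side of the same handshake is visible in the trace above: bdev_nvme_attach_controller is passed --psk key0 along with the host NQN, which is what makes the "TLS support is considered experimental" notice fire on both ends.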
00:24:04.288 [2024-12-10 12:27:11.094368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705030 ] 00:24:04.547 [2024-12-10 12:27:11.205279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.547 [2024-12-10 12:27:11.312398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.112 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.112 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:05.112 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j8KxG0QlrY 00:24:05.371 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:05.629 [2024-12-10 12:27:12.241552] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.629 nvme0n1 00:24:05.629 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.629 Running I/O for 1 seconds... 00:24:07.005 4472.00 IOPS, 17.47 MiB/s 00:24:07.005 Latency(us) 00:24:07.005 [2024-12-10T11:27:13.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.005 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:07.005 Verification LBA range: start 0x0 length 0x2000 00:24:07.005 nvme0n1 : 1.03 4467.26 17.45 0.00 0.00 28302.81 8176.40 29085.50 00:24:07.005 [2024-12-10T11:27:13.831Z] =================================================================================================================== 00:24:07.005 [2024-12-10T11:27:13.831Z] Total : 4467.26 17.45 0.00 0.00 28302.81 8176.40 29085.50 00:24:07.005 { 00:24:07.005 "results": [ 00:24:07.005 { 00:24:07.005 "job": "nvme0n1", 00:24:07.005 "core_mask": "0x2", 00:24:07.005 "workload": "verify", 00:24:07.005 "status": "finished", 00:24:07.005 "verify_range": { 00:24:07.005 "start": 0, 00:24:07.005 "length": 8192 00:24:07.005 }, 00:24:07.005 "queue_depth": 128, 00:24:07.005 "io_size": 4096, 00:24:07.005 "runtime": 1.029714, 00:24:07.005 "iops": 4467.2598410820865, 00:24:07.005 "mibps": 17.4502337542269, 00:24:07.005 "io_failed": 0, 00:24:07.005 "io_timeout": 0, 00:24:07.005 "avg_latency_us": 28302.807082401654, 00:24:07.005 "min_latency_us": 8176.396190476191, 00:24:07.005 "max_latency_us": 29085.500952380953 00:24:07.005 } 00:24:07.005 ], 00:24:07.005 "core_count": 1 00:24:07.005 } 00:24:07.005 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:07.005 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.005 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.005 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.005 12:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:07.005 "subsystems": [ 00:24:07.005 { 00:24:07.005 "subsystem": "keyring", 00:24:07.005 "config": [ 00:24:07.005 { 00:24:07.005 "method": "keyring_file_add_key", 00:24:07.005 "params": { 00:24:07.005 "name": "key0", 00:24:07.005 "path": "/tmp/tmp.j8KxG0QlrY" 00:24:07.005 } 00:24:07.005 } 00:24:07.005 ] 00:24:07.005 }, 00:24:07.005 { 00:24:07.005 "subsystem": "iobuf", 00:24:07.005 "config": [ 00:24:07.005 { 00:24:07.005 "method": "iobuf_set_options", 00:24:07.005 "params": { 00:24:07.005 "small_pool_count": 8192, 00:24:07.005 "large_pool_count": 1024, 00:24:07.005 "small_bufsize": 8192, 00:24:07.005 "large_bufsize": 135168, 00:24:07.005 "enable_numa": false 00:24:07.005 } 00:24:07.005 } 00:24:07.005 ] 00:24:07.005 }, 00:24:07.005 { 00:24:07.005 "subsystem": "sock", 00:24:07.005 "config": [ 00:24:07.005 { 00:24:07.005 "method": "sock_set_default_impl", 00:24:07.005 "params": { 00:24:07.005 "impl_name": "posix" 00:24:07.005 } 00:24:07.005 }, 00:24:07.005 { 00:24:07.005 "method": "sock_impl_set_options", 00:24:07.005 "params": { 00:24:07.005 "impl_name": "ssl", 00:24:07.005 "recv_buf_size": 4096, 00:24:07.005 "send_buf_size": 4096, 00:24:07.005 "enable_recv_pipe": true, 00:24:07.005 "enable_quickack": false, 00:24:07.005 "enable_placement_id": 0, 00:24:07.005 "enable_zerocopy_send_server": true, 00:24:07.005 "enable_zerocopy_send_client": false, 00:24:07.005 "zerocopy_threshold": 0, 00:24:07.005 "tls_version": 0, 00:24:07.005 "enable_ktls": false 00:24:07.005 } 00:24:07.005 }, 00:24:07.005 { 00:24:07.005 "method": "sock_impl_set_options", 00:24:07.005 "params": { 00:24:07.005 "impl_name": "posix", 00:24:07.005 "recv_buf_size": 2097152, 00:24:07.005 "send_buf_size": 2097152, 00:24:07.005 "enable_recv_pipe": true, 00:24:07.005 "enable_quickack": false, 00:24:07.005 "enable_placement_id": 0, 00:24:07.005 "enable_zerocopy_send_server": true, 00:24:07.005 "enable_zerocopy_send_client": false, 00:24:07.005 "zerocopy_threshold": 0, 00:24:07.005 "tls_version": 0, 00:24:07.005 "enable_ktls": false 00:24:07.005 } 00:24:07.005 } 00:24:07.005 ] 00:24:07.005 }, 00:24:07.005 { 00:24:07.005 "subsystem": "vmd", 00:24:07.005 "config": [] 00:24:07.005 }, 00:24:07.005 { 00:24:07.005 "subsystem": "accel", 00:24:07.005 "config": [ 00:24:07.005 { 00:24:07.005 "method": "accel_set_options", 00:24:07.005 "params": { 00:24:07.005 "small_cache_size": 128, 00:24:07.005 "large_cache_size": 16, 00:24:07.005 "task_count": 2048, 00:24:07.005 "sequence_count": 2048, 00:24:07.005 "buf_count": 2048 00:24:07.005 } 00:24:07.005 } 00:24:07.005 ] 00:24:07.005 }, 00:24:07.005 { 00:24:07.005 "subsystem": "bdev", 00:24:07.005 "config": [ 00:24:07.005 { 00:24:07.005 "method": "bdev_set_options", 00:24:07.005 "params": { 00:24:07.005 "bdev_io_pool_size": 65535, 00:24:07.005 "bdev_io_cache_size": 256, 00:24:07.005 "bdev_auto_examine": true, 00:24:07.005 "iobuf_small_cache_size": 128, 00:24:07.005 "iobuf_large_cache_size": 16 00:24:07.005 } 00:24:07.005 }, 00:24:07.005 { 00:24:07.006 "method": "bdev_raid_set_options", 00:24:07.006 "params": { 00:24:07.006 "process_window_size_kb": 1024, 00:24:07.006 "process_max_bandwidth_mb_sec": 0 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "bdev_iscsi_set_options", 00:24:07.006 "params": { 00:24:07.006 "timeout_sec": 30 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "bdev_nvme_set_options", 00:24:07.006 "params": { 00:24:07.006 "action_on_timeout": "none", 00:24:07.006 
"timeout_us": 0, 00:24:07.006 "timeout_admin_us": 0, 00:24:07.006 "keep_alive_timeout_ms": 10000, 00:24:07.006 "arbitration_burst": 0, 00:24:07.006 "low_priority_weight": 0, 00:24:07.006 "medium_priority_weight": 0, 00:24:07.006 "high_priority_weight": 0, 00:24:07.006 "nvme_adminq_poll_period_us": 10000, 00:24:07.006 "nvme_ioq_poll_period_us": 0, 00:24:07.006 "io_queue_requests": 0, 00:24:07.006 "delay_cmd_submit": true, 00:24:07.006 "transport_retry_count": 4, 00:24:07.006 "bdev_retry_count": 3, 00:24:07.006 "transport_ack_timeout": 0, 00:24:07.006 "ctrlr_loss_timeout_sec": 0, 00:24:07.006 "reconnect_delay_sec": 0, 00:24:07.006 "fast_io_fail_timeout_sec": 0, 00:24:07.006 "disable_auto_failback": false, 00:24:07.006 "generate_uuids": false, 00:24:07.006 "transport_tos": 0, 00:24:07.006 "nvme_error_stat": false, 00:24:07.006 "rdma_srq_size": 0, 00:24:07.006 "io_path_stat": false, 00:24:07.006 "allow_accel_sequence": false, 00:24:07.006 "rdma_max_cq_size": 0, 00:24:07.006 "rdma_cm_event_timeout_ms": 0, 00:24:07.006 "dhchap_digests": [ 00:24:07.006 "sha256", 00:24:07.006 "sha384", 00:24:07.006 "sha512" 00:24:07.006 ], 00:24:07.006 "dhchap_dhgroups": [ 00:24:07.006 "null", 00:24:07.006 "ffdhe2048", 00:24:07.006 "ffdhe3072", 00:24:07.006 "ffdhe4096", 00:24:07.006 "ffdhe6144", 00:24:07.006 "ffdhe8192" 00:24:07.006 ] 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "bdev_nvme_set_hotplug", 00:24:07.006 "params": { 00:24:07.006 "period_us": 100000, 00:24:07.006 "enable": false 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "bdev_malloc_create", 00:24:07.006 "params": { 00:24:07.006 "name": "malloc0", 00:24:07.006 "num_blocks": 8192, 00:24:07.006 "block_size": 4096, 00:24:07.006 "physical_block_size": 4096, 00:24:07.006 "uuid": "c8715102-4a61-441e-96dd-a69805d02fb9", 00:24:07.006 "optimal_io_boundary": 0, 00:24:07.006 "md_size": 0, 00:24:07.006 "dif_type": 0, 00:24:07.006 "dif_is_head_of_md": false, 00:24:07.006 "dif_pi_format": 0 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "bdev_wait_for_examine" 00:24:07.006 } 00:24:07.006 ] 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "subsystem": "nbd", 00:24:07.006 "config": [] 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "subsystem": "scheduler", 00:24:07.006 "config": [ 00:24:07.006 { 00:24:07.006 "method": "framework_set_scheduler", 00:24:07.006 "params": { 00:24:07.006 "name": "static" 00:24:07.006 } 00:24:07.006 } 00:24:07.006 ] 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "subsystem": "nvmf", 00:24:07.006 "config": [ 00:24:07.006 { 00:24:07.006 "method": "nvmf_set_config", 00:24:07.006 "params": { 00:24:07.006 "discovery_filter": "match_any", 00:24:07.006 "admin_cmd_passthru": { 00:24:07.006 "identify_ctrlr": false 00:24:07.006 }, 00:24:07.006 "dhchap_digests": [ 00:24:07.006 "sha256", 00:24:07.006 "sha384", 00:24:07.006 "sha512" 00:24:07.006 ], 00:24:07.006 "dhchap_dhgroups": [ 00:24:07.006 "null", 00:24:07.006 "ffdhe2048", 00:24:07.006 "ffdhe3072", 00:24:07.006 "ffdhe4096", 00:24:07.006 "ffdhe6144", 00:24:07.006 "ffdhe8192" 00:24:07.006 ] 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "nvmf_set_max_subsystems", 00:24:07.006 "params": { 00:24:07.006 "max_subsystems": 1024 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "nvmf_set_crdt", 00:24:07.006 "params": { 00:24:07.006 "crdt1": 0, 00:24:07.006 "crdt2": 0, 00:24:07.006 "crdt3": 0 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "nvmf_create_transport", 00:24:07.006 "params": 
{ 00:24:07.006 "trtype": "TCP", 00:24:07.006 "max_queue_depth": 128, 00:24:07.006 "max_io_qpairs_per_ctrlr": 127, 00:24:07.006 "in_capsule_data_size": 4096, 00:24:07.006 "max_io_size": 131072, 00:24:07.006 "io_unit_size": 131072, 00:24:07.006 "max_aq_depth": 128, 00:24:07.006 "num_shared_buffers": 511, 00:24:07.006 "buf_cache_size": 4294967295, 00:24:07.006 "dif_insert_or_strip": false, 00:24:07.006 "zcopy": false, 00:24:07.006 "c2h_success": false, 00:24:07.006 "sock_priority": 0, 00:24:07.006 "abort_timeout_sec": 1, 00:24:07.006 "ack_timeout": 0, 00:24:07.006 "data_wr_pool_size": 0 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "nvmf_create_subsystem", 00:24:07.006 "params": { 00:24:07.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.006 "allow_any_host": false, 00:24:07.006 "serial_number": "00000000000000000000", 00:24:07.006 "model_number": "SPDK bdev Controller", 00:24:07.006 "max_namespaces": 32, 00:24:07.006 "min_cntlid": 1, 00:24:07.006 "max_cntlid": 65519, 00:24:07.006 "ana_reporting": false 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "nvmf_subsystem_add_host", 00:24:07.006 "params": { 00:24:07.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.006 "host": "nqn.2016-06.io.spdk:host1", 00:24:07.006 "psk": "key0" 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "nvmf_subsystem_add_ns", 00:24:07.006 "params": { 00:24:07.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.006 "namespace": { 00:24:07.006 "nsid": 1, 00:24:07.006 "bdev_name": "malloc0", 00:24:07.006 "nguid": "C87151024A61441E96DDA69805D02FB9", 00:24:07.006 "uuid": "c8715102-4a61-441e-96dd-a69805d02fb9", 00:24:07.006 "no_auto_visible": false 00:24:07.006 } 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "nvmf_subsystem_add_listener", 00:24:07.006 "params": { 00:24:07.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.006 "listen_address": { 00:24:07.006 "trtype": "TCP", 00:24:07.006 "adrfam": "IPv4", 00:24:07.006 "traddr": "10.0.0.2", 00:24:07.006 "trsvcid": "4420" 00:24:07.006 }, 00:24:07.006 "secure_channel": false, 00:24:07.006 "sock_impl": "ssl" 00:24:07.006 } 00:24:07.006 } 00:24:07.006 ] 00:24:07.006 } 00:24:07.006 ] 00:24:07.006 }' 00:24:07.006 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:07.006 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:07.006 "subsystems": [ 00:24:07.006 { 00:24:07.006 "subsystem": "keyring", 00:24:07.006 "config": [ 00:24:07.006 { 00:24:07.006 "method": "keyring_file_add_key", 00:24:07.006 "params": { 00:24:07.006 "name": "key0", 00:24:07.006 "path": "/tmp/tmp.j8KxG0QlrY" 00:24:07.006 } 00:24:07.006 } 00:24:07.006 ] 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "subsystem": "iobuf", 00:24:07.006 "config": [ 00:24:07.006 { 00:24:07.006 "method": "iobuf_set_options", 00:24:07.006 "params": { 00:24:07.006 "small_pool_count": 8192, 00:24:07.006 "large_pool_count": 1024, 00:24:07.006 "small_bufsize": 8192, 00:24:07.006 "large_bufsize": 135168, 00:24:07.006 "enable_numa": false 00:24:07.006 } 00:24:07.006 } 00:24:07.006 ] 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "subsystem": "sock", 00:24:07.006 "config": [ 00:24:07.006 { 00:24:07.006 "method": "sock_set_default_impl", 00:24:07.006 "params": { 00:24:07.006 "impl_name": "posix" 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "sock_impl_set_options", 00:24:07.006 
"params": { 00:24:07.006 "impl_name": "ssl", 00:24:07.006 "recv_buf_size": 4096, 00:24:07.006 "send_buf_size": 4096, 00:24:07.006 "enable_recv_pipe": true, 00:24:07.006 "enable_quickack": false, 00:24:07.006 "enable_placement_id": 0, 00:24:07.006 "enable_zerocopy_send_server": true, 00:24:07.006 "enable_zerocopy_send_client": false, 00:24:07.006 "zerocopy_threshold": 0, 00:24:07.006 "tls_version": 0, 00:24:07.006 "enable_ktls": false 00:24:07.006 } 00:24:07.006 }, 00:24:07.006 { 00:24:07.006 "method": "sock_impl_set_options", 00:24:07.006 "params": { 00:24:07.006 "impl_name": "posix", 00:24:07.006 "recv_buf_size": 2097152, 00:24:07.007 "send_buf_size": 2097152, 00:24:07.007 "enable_recv_pipe": true, 00:24:07.007 "enable_quickack": false, 00:24:07.007 "enable_placement_id": 0, 00:24:07.007 "enable_zerocopy_send_server": true, 00:24:07.007 "enable_zerocopy_send_client": false, 00:24:07.007 "zerocopy_threshold": 0, 00:24:07.007 "tls_version": 0, 00:24:07.007 "enable_ktls": false 00:24:07.007 } 00:24:07.007 } 00:24:07.007 ] 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "subsystem": "vmd", 00:24:07.007 "config": [] 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "subsystem": "accel", 00:24:07.007 "config": [ 00:24:07.007 { 00:24:07.007 "method": "accel_set_options", 00:24:07.007 "params": { 00:24:07.007 "small_cache_size": 128, 00:24:07.007 "large_cache_size": 16, 00:24:07.007 "task_count": 2048, 00:24:07.007 "sequence_count": 2048, 00:24:07.007 "buf_count": 2048 00:24:07.007 } 00:24:07.007 } 00:24:07.007 ] 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "subsystem": "bdev", 00:24:07.007 "config": [ 00:24:07.007 { 00:24:07.007 "method": "bdev_set_options", 00:24:07.007 "params": { 00:24:07.007 "bdev_io_pool_size": 65535, 00:24:07.007 "bdev_io_cache_size": 256, 00:24:07.007 "bdev_auto_examine": true, 00:24:07.007 "iobuf_small_cache_size": 128, 00:24:07.007 "iobuf_large_cache_size": 16 00:24:07.007 } 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "method": "bdev_raid_set_options", 00:24:07.007 "params": { 00:24:07.007 "process_window_size_kb": 1024, 00:24:07.007 "process_max_bandwidth_mb_sec": 0 00:24:07.007 } 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "method": "bdev_iscsi_set_options", 00:24:07.007 "params": { 00:24:07.007 "timeout_sec": 30 00:24:07.007 } 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "method": "bdev_nvme_set_options", 00:24:07.007 "params": { 00:24:07.007 "action_on_timeout": "none", 00:24:07.007 "timeout_us": 0, 00:24:07.007 "timeout_admin_us": 0, 00:24:07.007 "keep_alive_timeout_ms": 10000, 00:24:07.007 "arbitration_burst": 0, 00:24:07.007 "low_priority_weight": 0, 00:24:07.007 "medium_priority_weight": 0, 00:24:07.007 "high_priority_weight": 0, 00:24:07.007 "nvme_adminq_poll_period_us": 10000, 00:24:07.007 "nvme_ioq_poll_period_us": 0, 00:24:07.007 "io_queue_requests": 512, 00:24:07.007 "delay_cmd_submit": true, 00:24:07.007 "transport_retry_count": 4, 00:24:07.007 "bdev_retry_count": 3, 00:24:07.007 "transport_ack_timeout": 0, 00:24:07.007 "ctrlr_loss_timeout_sec": 0, 00:24:07.007 "reconnect_delay_sec": 0, 00:24:07.007 "fast_io_fail_timeout_sec": 0, 00:24:07.007 "disable_auto_failback": false, 00:24:07.007 "generate_uuids": false, 00:24:07.007 "transport_tos": 0, 00:24:07.007 "nvme_error_stat": false, 00:24:07.007 "rdma_srq_size": 0, 00:24:07.007 "io_path_stat": false, 00:24:07.007 "allow_accel_sequence": false, 00:24:07.007 "rdma_max_cq_size": 0, 00:24:07.007 "rdma_cm_event_timeout_ms": 0, 00:24:07.007 "dhchap_digests": [ 00:24:07.007 "sha256", 00:24:07.007 "sha384", 00:24:07.007 
"sha512" 00:24:07.007 ], 00:24:07.007 "dhchap_dhgroups": [ 00:24:07.007 "null", 00:24:07.007 "ffdhe2048", 00:24:07.007 "ffdhe3072", 00:24:07.007 "ffdhe4096", 00:24:07.007 "ffdhe6144", 00:24:07.007 "ffdhe8192" 00:24:07.007 ] 00:24:07.007 } 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "method": "bdev_nvme_attach_controller", 00:24:07.007 "params": { 00:24:07.007 "name": "nvme0", 00:24:07.007 "trtype": "TCP", 00:24:07.007 "adrfam": "IPv4", 00:24:07.007 "traddr": "10.0.0.2", 00:24:07.007 "trsvcid": "4420", 00:24:07.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.007 "prchk_reftag": false, 00:24:07.007 "prchk_guard": false, 00:24:07.007 "ctrlr_loss_timeout_sec": 0, 00:24:07.007 "reconnect_delay_sec": 0, 00:24:07.007 "fast_io_fail_timeout_sec": 0, 00:24:07.007 "psk": "key0", 00:24:07.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.007 "hdgst": false, 00:24:07.007 "ddgst": false, 00:24:07.007 "multipath": "multipath" 00:24:07.007 } 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "method": "bdev_nvme_set_hotplug", 00:24:07.007 "params": { 00:24:07.007 "period_us": 100000, 00:24:07.007 "enable": false 00:24:07.007 } 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "method": "bdev_enable_histogram", 00:24:07.007 "params": { 00:24:07.007 "name": "nvme0n1", 00:24:07.007 "enable": true 00:24:07.007 } 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "method": "bdev_wait_for_examine" 00:24:07.007 } 00:24:07.007 ] 00:24:07.007 }, 00:24:07.007 { 00:24:07.007 "subsystem": "nbd", 00:24:07.007 "config": [] 00:24:07.007 } 00:24:07.007 ] 00:24:07.007 }' 00:24:07.007 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3705030 00:24:07.007 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3705030 ']' 00:24:07.007 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3705030 00:24:07.007 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.007 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.007 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3705030 00:24:07.265 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:07.265 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:07.265 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3705030' 00:24:07.265 killing process with pid 3705030 00:24:07.265 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3705030 00:24:07.265 Received shutdown signal, test time was about 1.000000 seconds 00:24:07.265 00:24:07.265 Latency(us) 00:24:07.265 [2024-12-10T11:27:14.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.265 [2024-12-10T11:27:14.091Z] =================================================================================================================== 00:24:07.265 [2024-12-10T11:27:14.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.265 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3705030 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3704901 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3704901 
']' 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3704901 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704901 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.200 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704901' 00:24:08.201 killing process with pid 3704901 00:24:08.201 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3704901 00:24:08.201 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3704901 00:24:09.575 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:09.575 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.575 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.575 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.575 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:09.575 "subsystems": [ 00:24:09.575 { 00:24:09.575 "subsystem": "keyring", 00:24:09.575 "config": [ 00:24:09.575 { 00:24:09.575 "method": "keyring_file_add_key", 00:24:09.575 "params": { 00:24:09.575 "name": "key0", 00:24:09.575 "path": "/tmp/tmp.j8KxG0QlrY" 00:24:09.575 } 00:24:09.575 } 00:24:09.575 ] 00:24:09.575 }, 00:24:09.575 { 00:24:09.575 "subsystem": "iobuf", 00:24:09.575 "config": [ 00:24:09.575 { 00:24:09.575 "method": "iobuf_set_options", 00:24:09.575 "params": { 00:24:09.575 "small_pool_count": 8192, 00:24:09.575 "large_pool_count": 1024, 00:24:09.575 "small_bufsize": 8192, 00:24:09.575 "large_bufsize": 135168, 00:24:09.575 "enable_numa": false 00:24:09.575 } 00:24:09.575 } 00:24:09.575 ] 00:24:09.575 }, 00:24:09.575 { 00:24:09.575 "subsystem": "sock", 00:24:09.575 "config": [ 00:24:09.575 { 00:24:09.575 "method": "sock_set_default_impl", 00:24:09.575 "params": { 00:24:09.575 "impl_name": "posix" 00:24:09.575 } 00:24:09.575 }, 00:24:09.575 { 00:24:09.575 "method": "sock_impl_set_options", 00:24:09.575 "params": { 00:24:09.575 "impl_name": "ssl", 00:24:09.575 "recv_buf_size": 4096, 00:24:09.575 "send_buf_size": 4096, 00:24:09.575 "enable_recv_pipe": true, 00:24:09.575 "enable_quickack": false, 00:24:09.575 "enable_placement_id": 0, 00:24:09.575 "enable_zerocopy_send_server": true, 00:24:09.576 "enable_zerocopy_send_client": false, 00:24:09.576 "zerocopy_threshold": 0, 00:24:09.576 "tls_version": 0, 00:24:09.576 "enable_ktls": false 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "sock_impl_set_options", 00:24:09.576 "params": { 00:24:09.576 "impl_name": "posix", 00:24:09.576 "recv_buf_size": 2097152, 00:24:09.576 "send_buf_size": 2097152, 00:24:09.576 "enable_recv_pipe": true, 00:24:09.576 "enable_quickack": false, 00:24:09.576 "enable_placement_id": 0, 00:24:09.576 "enable_zerocopy_send_server": true, 00:24:09.576 "enable_zerocopy_send_client": 
false, 00:24:09.576 "zerocopy_threshold": 0, 00:24:09.576 "tls_version": 0, 00:24:09.576 "enable_ktls": false 00:24:09.576 } 00:24:09.576 } 00:24:09.576 ] 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "subsystem": "vmd", 00:24:09.576 "config": [] 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "subsystem": "accel", 00:24:09.576 "config": [ 00:24:09.576 { 00:24:09.576 "method": "accel_set_options", 00:24:09.576 "params": { 00:24:09.576 "small_cache_size": 128, 00:24:09.576 "large_cache_size": 16, 00:24:09.576 "task_count": 2048, 00:24:09.576 "sequence_count": 2048, 00:24:09.576 "buf_count": 2048 00:24:09.576 } 00:24:09.576 } 00:24:09.576 ] 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "subsystem": "bdev", 00:24:09.576 "config": [ 00:24:09.576 { 00:24:09.576 "method": "bdev_set_options", 00:24:09.576 "params": { 00:24:09.576 "bdev_io_pool_size": 65535, 00:24:09.576 "bdev_io_cache_size": 256, 00:24:09.576 "bdev_auto_examine": true, 00:24:09.576 "iobuf_small_cache_size": 128, 00:24:09.576 "iobuf_large_cache_size": 16 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "bdev_raid_set_options", 00:24:09.576 "params": { 00:24:09.576 "process_window_size_kb": 1024, 00:24:09.576 "process_max_bandwidth_mb_sec": 0 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "bdev_iscsi_set_options", 00:24:09.576 "params": { 00:24:09.576 "timeout_sec": 30 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "bdev_nvme_set_options", 00:24:09.576 "params": { 00:24:09.576 "action_on_timeout": "none", 00:24:09.576 "timeout_us": 0, 00:24:09.576 "timeout_admin_us": 0, 00:24:09.576 "keep_alive_timeout_ms": 10000, 00:24:09.576 "arbitration_burst": 0, 00:24:09.576 "low_priority_weight": 0, 00:24:09.576 "medium_priority_weight": 0, 00:24:09.576 "high_priority_weight": 0, 00:24:09.576 "nvme_adminq_poll_period_us": 10000, 00:24:09.576 "nvme_ioq_poll_period_us": 0, 00:24:09.576 "io_queue_requests": 0, 00:24:09.576 "delay_cmd_submit": true, 00:24:09.576 "transport_retry_count": 4, 00:24:09.576 "bdev_retry_count": 3, 00:24:09.576 "transport_ack_timeout": 0, 00:24:09.576 "ctrlr_loss_timeout_sec": 0, 00:24:09.576 "reconnect_delay_sec": 0, 00:24:09.576 "fast_io_fail_timeout_sec": 0, 00:24:09.576 "disable_auto_failback": false, 00:24:09.576 "generate_uuids": false, 00:24:09.576 "transport_tos": 0, 00:24:09.576 "nvme_error_stat": false, 00:24:09.576 "rdma_srq_size": 0, 00:24:09.576 "io_path_stat": false, 00:24:09.576 "allow_accel_sequence": false, 00:24:09.576 "rdma_max_cq_size": 0, 00:24:09.576 "rdma_cm_event_timeout_ms": 0, 00:24:09.576 "dhchap_digests": [ 00:24:09.576 "sha256", 00:24:09.576 "sha384", 00:24:09.576 "sha512" 00:24:09.576 ], 00:24:09.576 "dhchap_dhgroups": [ 00:24:09.576 "null", 00:24:09.576 "ffdhe2048", 00:24:09.576 "ffdhe3072", 00:24:09.576 "ffdhe4096", 00:24:09.576 "ffdhe6144", 00:24:09.576 "ffdhe8192" 00:24:09.576 ] 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "bdev_nvme_set_hotplug", 00:24:09.576 "params": { 00:24:09.576 "period_us": 100000, 00:24:09.576 "enable": false 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "bdev_malloc_create", 00:24:09.576 "params": { 00:24:09.576 "name": "malloc0", 00:24:09.576 "num_blocks": 8192, 00:24:09.576 "block_size": 4096, 00:24:09.576 "physical_block_size": 4096, 00:24:09.576 "uuid": "c8715102-4a61-441e-96dd-a69805d02fb9", 00:24:09.576 "optimal_io_boundary": 0, 00:24:09.576 "md_size": 0, 00:24:09.576 "dif_type": 0, 00:24:09.576 "dif_is_head_of_md": false, 00:24:09.576 "dif_pi_format": 0 
00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "bdev_wait_for_examine" 00:24:09.576 } 00:24:09.576 ] 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "subsystem": "nbd", 00:24:09.576 "config": [] 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "subsystem": "scheduler", 00:24:09.576 "config": [ 00:24:09.576 { 00:24:09.576 "method": "framework_set_scheduler", 00:24:09.576 "params": { 00:24:09.576 "name": "static" 00:24:09.576 } 00:24:09.576 } 00:24:09.576 ] 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "subsystem": "nvmf", 00:24:09.576 "config": [ 00:24:09.576 { 00:24:09.576 "method": "nvmf_set_config", 00:24:09.576 "params": { 00:24:09.576 "discovery_filter": "match_any", 00:24:09.576 "admin_cmd_passthru": { 00:24:09.576 "identify_ctrlr": false 00:24:09.576 }, 00:24:09.576 "dhchap_digests": [ 00:24:09.576 "sha256", 00:24:09.576 "sha384", 00:24:09.576 "sha512" 00:24:09.576 ], 00:24:09.576 "dhchap_dhgroups": [ 00:24:09.576 "null", 00:24:09.576 "ffdhe2048", 00:24:09.576 "ffdhe3072", 00:24:09.576 "ffdhe4096", 00:24:09.576 "ffdhe6144", 00:24:09.576 "ffdhe8192" 00:24:09.576 ] 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "nvmf_set_max_subsystems", 00:24:09.576 "params": { 00:24:09.576 "max_subsystems": 1024 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "nvmf_set_crdt", 00:24:09.576 "params": { 00:24:09.576 "crdt1": 0, 00:24:09.576 "crdt2": 0, 00:24:09.576 "crdt3": 0 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "nvmf_create_transport", 00:24:09.576 "params": { 00:24:09.576 "trtype": "TCP", 00:24:09.576 "max_queue_depth": 128, 00:24:09.576 "max_io_qpairs_per_ctrlr": 127, 00:24:09.576 "in_capsule_data_size": 4096, 00:24:09.576 "max_io_size": 131072, 00:24:09.576 "io_unit_size": 131072, 00:24:09.576 "max_aq_depth": 128, 00:24:09.576 "num_shared_buffers": 511, 00:24:09.576 "buf_cache_size": 4294967295, 00:24:09.576 "dif_insert_or_strip": false, 00:24:09.576 "zcopy": false, 00:24:09.576 "c2h_success": false, 00:24:09.576 "sock_priority": 0, 00:24:09.576 "abort_timeout_sec": 1, 00:24:09.576 "ack_timeout": 0, 00:24:09.576 "data_wr_pool_size": 0 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "nvmf_create_subsystem", 00:24:09.576 "params": { 00:24:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.576 "allow_any_host": false, 00:24:09.576 "serial_number": "00000000000000000000", 00:24:09.576 "model_number": "SPDK bdev Controller", 00:24:09.576 "max_namespaces": 32, 00:24:09.576 "min_cntlid": 1, 00:24:09.576 "max_cntlid": 65519, 00:24:09.576 "ana_reporting": false 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "nvmf_subsystem_add_host", 00:24:09.576 "params": { 00:24:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.576 "host": "nqn.2016-06.io.spdk:host1", 00:24:09.576 "psk": "key0" 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "nvmf_subsystem_add_ns", 00:24:09.576 "params": { 00:24:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.576 "namespace": { 00:24:09.576 "nsid": 1, 00:24:09.576 "bdev_name": "malloc0", 00:24:09.576 "nguid": "C87151024A61441E96DDA69805D02FB9", 00:24:09.576 "uuid": "c8715102-4a61-441e-96dd-a69805d02fb9", 00:24:09.576 "no_auto_visible": false 00:24:09.576 } 00:24:09.576 } 00:24:09.576 }, 00:24:09.576 { 00:24:09.576 "method": "nvmf_subsystem_add_listener", 00:24:09.576 "params": { 00:24:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.576 "listen_address": { 00:24:09.576 "trtype": "TCP", 00:24:09.576 "adrfam": "IPv4", 
00:24:09.576 "traddr": "10.0.0.2", 00:24:09.576 "trsvcid": "4420" 00:24:09.576 }, 00:24:09.576 "secure_channel": false, 00:24:09.576 "sock_impl": "ssl" 00:24:09.576 } 00:24:09.576 } 00:24:09.576 ] 00:24:09.576 } 00:24:09.576 ] 00:24:09.576 }' 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3705936 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3705936 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3705936 ']' 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.576 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:09.576 [2024-12-10 12:27:16.076613] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:09.577 [2024-12-10 12:27:16.076700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.577 [2024-12-10 12:27:16.192237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.577 [2024-12-10 12:27:16.296711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.577 [2024-12-10 12:27:16.296756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.577 [2024-12-10 12:27:16.296766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.577 [2024-12-10 12:27:16.296775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.577 [2024-12-10 12:27:16.296783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.577 [2024-12-10 12:27:16.298305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.144 [2024-12-10 12:27:16.788889] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.144 [2024-12-10 12:27:16.820942] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.144 [2024-12-10 12:27:16.821187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.144 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.144 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.144 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.144 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.144 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.144 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.144 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3706012 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3706012 /var/tmp/bdevperf.sock 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3706012 ']' 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
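[Editor's sketch] Both apps in this phase take their JSON configuration on a file descriptor rather than a file on disk: nvmf_tgt was started with -c /dev/fd/62 above, and bdevperf is about to be started with -c /dev/fd/63. In bash this is process substitution; a sketch of the idiom (the variable name is illustrative, the bdevperf flags are copied from this log):

    bperfcfg='{ "subsystems": [ ... ] }'   # JSON, e.g. captured earlier via save_config
    # bash replaces <(...) with /dev/fd/NN, so the app reads the config with no temp file:
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")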
00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:10.145 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:10.145 "subsystems": [ 00:24:10.145 { 00:24:10.145 "subsystem": "keyring", 00:24:10.145 "config": [ 00:24:10.145 { 00:24:10.145 "method": "keyring_file_add_key", 00:24:10.145 "params": { 00:24:10.145 "name": "key0", 00:24:10.145 "path": "/tmp/tmp.j8KxG0QlrY" 00:24:10.145 } 00:24:10.145 } 00:24:10.145 ] 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "subsystem": "iobuf", 00:24:10.145 "config": [ 00:24:10.145 { 00:24:10.145 "method": "iobuf_set_options", 00:24:10.145 "params": { 00:24:10.145 "small_pool_count": 8192, 00:24:10.145 "large_pool_count": 1024, 00:24:10.145 "small_bufsize": 8192, 00:24:10.145 "large_bufsize": 135168, 00:24:10.145 "enable_numa": false 00:24:10.145 } 00:24:10.145 } 00:24:10.145 ] 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "subsystem": "sock", 00:24:10.145 "config": [ 00:24:10.145 { 00:24:10.145 "method": "sock_set_default_impl", 00:24:10.145 "params": { 00:24:10.145 "impl_name": "posix" 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "sock_impl_set_options", 00:24:10.145 "params": { 00:24:10.145 "impl_name": "ssl", 00:24:10.145 "recv_buf_size": 4096, 00:24:10.145 "send_buf_size": 4096, 00:24:10.145 "enable_recv_pipe": true, 00:24:10.145 "enable_quickack": false, 00:24:10.145 "enable_placement_id": 0, 00:24:10.145 "enable_zerocopy_send_server": true, 00:24:10.145 "enable_zerocopy_send_client": false, 00:24:10.145 "zerocopy_threshold": 0, 00:24:10.145 "tls_version": 0, 00:24:10.145 "enable_ktls": false 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "sock_impl_set_options", 00:24:10.145 "params": { 00:24:10.145 "impl_name": "posix", 00:24:10.145 "recv_buf_size": 2097152, 00:24:10.145 "send_buf_size": 2097152, 00:24:10.145 "enable_recv_pipe": true, 00:24:10.145 "enable_quickack": false, 00:24:10.145 "enable_placement_id": 0, 00:24:10.145 "enable_zerocopy_send_server": true, 00:24:10.145 "enable_zerocopy_send_client": false, 00:24:10.145 "zerocopy_threshold": 0, 00:24:10.145 "tls_version": 0, 00:24:10.145 "enable_ktls": false 00:24:10.145 } 00:24:10.145 } 00:24:10.145 ] 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "subsystem": "vmd", 00:24:10.145 "config": [] 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "subsystem": "accel", 00:24:10.145 "config": [ 00:24:10.145 { 00:24:10.145 "method": "accel_set_options", 00:24:10.145 "params": { 00:24:10.145 "small_cache_size": 128, 00:24:10.145 "large_cache_size": 16, 00:24:10.145 "task_count": 2048, 00:24:10.145 "sequence_count": 2048, 00:24:10.145 "buf_count": 2048 00:24:10.145 } 00:24:10.145 } 00:24:10.145 ] 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "subsystem": "bdev", 00:24:10.145 "config": [ 00:24:10.145 { 00:24:10.145 "method": "bdev_set_options", 00:24:10.145 "params": { 00:24:10.145 "bdev_io_pool_size": 65535, 00:24:10.145 "bdev_io_cache_size": 256, 00:24:10.145 "bdev_auto_examine": true, 00:24:10.145 "iobuf_small_cache_size": 128, 00:24:10.145 "iobuf_large_cache_size": 16 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": 
"bdev_raid_set_options", 00:24:10.145 "params": { 00:24:10.145 "process_window_size_kb": 1024, 00:24:10.145 "process_max_bandwidth_mb_sec": 0 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "bdev_iscsi_set_options", 00:24:10.145 "params": { 00:24:10.145 "timeout_sec": 30 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "bdev_nvme_set_options", 00:24:10.145 "params": { 00:24:10.145 "action_on_timeout": "none", 00:24:10.145 "timeout_us": 0, 00:24:10.145 "timeout_admin_us": 0, 00:24:10.145 "keep_alive_timeout_ms": 10000, 00:24:10.145 "arbitration_burst": 0, 00:24:10.145 "low_priority_weight": 0, 00:24:10.145 "medium_priority_weight": 0, 00:24:10.145 "high_priority_weight": 0, 00:24:10.145 "nvme_adminq_poll_period_us": 10000, 00:24:10.145 "nvme_ioq_poll_period_us": 0, 00:24:10.145 "io_queue_requests": 512, 00:24:10.145 "delay_cmd_submit": true, 00:24:10.145 "transport_retry_count": 4, 00:24:10.145 "bdev_retry_count": 3, 00:24:10.145 "transport_ack_timeout": 0, 00:24:10.145 "ctrlr_loss_timeout_sec": 0, 00:24:10.145 "reconnect_delay_sec": 0, 00:24:10.145 "fast_io_fail_timeout_sec": 0, 00:24:10.145 "disable_auto_failback": false, 00:24:10.145 "generate_uuids": false, 00:24:10.145 "transport_tos": 0, 00:24:10.145 "nvme_error_stat": false, 00:24:10.145 "rdma_srq_size": 0, 00:24:10.145 "io_path_stat": false, 00:24:10.145 "allow_accel_sequence": false, 00:24:10.145 "rdma_max_cq_size": 0, 00:24:10.145 "rdma_cm_event_timeout_ms": 0, 00:24:10.145 "dhchap_digests": [ 00:24:10.145 "sha256", 00:24:10.145 "sha384", 00:24:10.145 "sha512" 00:24:10.145 ], 00:24:10.145 "dhchap_dhgroups": [ 00:24:10.145 "null", 00:24:10.145 "ffdhe2048", 00:24:10.145 "ffdhe3072", 00:24:10.145 "ffdhe4096", 00:24:10.145 "ffdhe6144", 00:24:10.145 "ffdhe8192" 00:24:10.145 ] 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "bdev_nvme_attach_controller", 00:24:10.145 "params": { 00:24:10.145 "name": "nvme0", 00:24:10.145 "trtype": "TCP", 00:24:10.145 "adrfam": "IPv4", 00:24:10.145 "traddr": "10.0.0.2", 00:24:10.145 "trsvcid": "4420", 00:24:10.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.145 "prchk_reftag": false, 00:24:10.145 "prchk_guard": false, 00:24:10.145 "ctrlr_loss_timeout_sec": 0, 00:24:10.145 "reconnect_delay_sec": 0, 00:24:10.145 "fast_io_fail_timeout_sec": 0, 00:24:10.145 "psk": "key0", 00:24:10.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.145 "hdgst": false, 00:24:10.145 "ddgst": false, 00:24:10.145 "multipath": "multipath" 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "bdev_nvme_set_hotplug", 00:24:10.145 "params": { 00:24:10.145 "period_us": 100000, 00:24:10.145 "enable": false 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "bdev_enable_histogram", 00:24:10.145 "params": { 00:24:10.145 "name": "nvme0n1", 00:24:10.145 "enable": true 00:24:10.145 } 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "method": "bdev_wait_for_examine" 00:24:10.145 } 00:24:10.145 ] 00:24:10.145 }, 00:24:10.145 { 00:24:10.145 "subsystem": "nbd", 00:24:10.145 "config": [] 00:24:10.145 } 00:24:10.145 ] 00:24:10.145 }' 00:24:10.145 [2024-12-10 12:27:16.969504] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:24:10.145 [2024-12-10 12:27:16.969597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706012 ] 00:24:10.404 [2024-12-10 12:27:17.081270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.404 [2024-12-10 12:27:17.188465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.971 [2024-12-10 12:27:17.590446] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.971 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.971 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.971 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:10.971 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:11.229 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.229 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:11.487 Running I/O for 1 seconds... 00:24:12.422 4363.00 IOPS, 17.04 MiB/s 00:24:12.422 Latency(us) 00:24:12.422 [2024-12-10T11:27:19.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.422 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:12.422 Verification LBA range: start 0x0 length 0x2000 00:24:12.422 nvme0n1 : 1.02 4418.80 17.26 0.00 0.00 28735.23 6272.73 26838.55 00:24:12.422 [2024-12-10T11:27:19.248Z] =================================================================================================================== 00:24:12.422 [2024-12-10T11:27:19.248Z] Total : 4418.80 17.26 0.00 0.00 28735.23 6272.73 26838.55 00:24:12.422 { 00:24:12.422 "results": [ 00:24:12.422 { 00:24:12.422 "job": "nvme0n1", 00:24:12.422 "core_mask": "0x2", 00:24:12.422 "workload": "verify", 00:24:12.422 "status": "finished", 00:24:12.422 "verify_range": { 00:24:12.422 "start": 0, 00:24:12.422 "length": 8192 00:24:12.422 }, 00:24:12.422 "queue_depth": 128, 00:24:12.422 "io_size": 4096, 00:24:12.422 "runtime": 1.016339, 00:24:12.422 "iops": 4418.801207077559, 00:24:12.422 "mibps": 17.260942215146716, 00:24:12.422 "io_failed": 0, 00:24:12.422 "io_timeout": 0, 00:24:12.422 "avg_latency_us": 28735.23347393199, 00:24:12.422 "min_latency_us": 6272.731428571428, 00:24:12.422 "max_latency_us": 26838.55238095238 00:24:12.422 } 00:24:12.422 ], 00:24:12.422 "core_count": 1 00:24:12.422 } 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:12.422 nvmf_trace.0 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3706012 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3706012 ']' 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3706012 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:12.422 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.423 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3706012 00:24:12.423 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:12.423 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:12.423 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3706012' 00:24:12.423 killing process with pid 3706012 00:24:12.423 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3706012 00:24:12.423 Received shutdown signal, test time was about 1.000000 seconds 00:24:12.423 00:24:12.423 Latency(us) 00:24:12.423 [2024-12-10T11:27:19.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.423 [2024-12-10T11:27:19.249Z] =================================================================================================================== 00:24:12.423 [2024-12-10T11:27:19.249Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.423 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3706012 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.357 rmmod nvme_tcp 00:24:13.357 rmmod nvme_fabrics 00:24:13.357 rmmod nvme_keyring 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.357 12:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3705936 ']' 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3705936 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3705936 ']' 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3705936 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.357 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3705936 00:24:13.614 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.614 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.614 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3705936' 00:24:13.614 killing process with pid 3705936 00:24:13.614 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3705936 00:24:13.614 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3705936 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.988 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.K7A6pHWw9I /tmp/tmp.Gwx0sCgCQC /tmp/tmp.j8KxG0QlrY 00:24:16.891 00:24:16.891 real 1m45.824s 00:24:16.891 user 2m44.373s 00:24:16.891 sys 0m30.850s 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.891 ************************************ 00:24:16.891 END TEST nvmf_tls 
00:24:16.891 ************************************ 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.891 ************************************ 00:24:16.891 START TEST nvmf_fips 00:24:16.891 ************************************ 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:16.891 * Looking for test storage... 00:24:16.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.891 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:17.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.151 --rc genhtml_branch_coverage=1 00:24:17.151 --rc genhtml_function_coverage=1 00:24:17.151 --rc genhtml_legend=1 00:24:17.151 --rc geninfo_all_blocks=1 00:24:17.151 --rc geninfo_unexecuted_blocks=1 00:24:17.151 00:24:17.151 ' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:17.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.151 --rc genhtml_branch_coverage=1 00:24:17.151 --rc genhtml_function_coverage=1 00:24:17.151 --rc genhtml_legend=1 00:24:17.151 --rc geninfo_all_blocks=1 00:24:17.151 --rc geninfo_unexecuted_blocks=1 00:24:17.151 00:24:17.151 ' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:17.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.151 --rc genhtml_branch_coverage=1 00:24:17.151 --rc genhtml_function_coverage=1 00:24:17.151 --rc genhtml_legend=1 00:24:17.151 --rc geninfo_all_blocks=1 00:24:17.151 --rc geninfo_unexecuted_blocks=1 00:24:17.151 00:24:17.151 ' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:17.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.151 --rc genhtml_branch_coverage=1 00:24:17.151 --rc genhtml_function_coverage=1 00:24:17.151 --rc genhtml_legend=1 00:24:17.151 --rc geninfo_all_blocks=1 00:24:17.151 --rc geninfo_unexecuted_blocks=1 00:24:17.151 00:24:17.151 ' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
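The lcov gate above ('lt 1.15 2') is a plain element-wise version comparison: each string is split on '.', '-' and ':', missing fields default to 0, and the first unequal numeric field decides. The same logic as a standalone function (the name and shape are mine; scripts/common.sh factors this through cmp_versions):

version_lt() {   # succeeds when $1 is strictly older than $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # non-numeric fields would need the [[ =~ ^[0-9]+$ ]] guard seen in the trace
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1     # equal is not less-than
}

version_lt 1.15 2 && echo 'lcov predates 2.x'   # true: 1 < 2 at the first field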
FreeBSD ]] 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:17.151 12:27:23 
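The "line 33: [: : integer expression expected" message above is real stderr from the sourced common.sh: '[' '' -eq 1 ']' asks test to compare an empty string numerically. The condition still evaluates false, so the run continues; a quieter guard would check for emptiness first (the variable name here is hypothetical, since the actual flag is not shown in this trace):

if [[ -n "$SOME_TEST_FLAG" && "$SOME_TEST_FLAG" -eq 1 ]]; then
    : # append the app argument this guard controls
fi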
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:17.151 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:17.152 Error setting digest 00:24:17.152 40E278824A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:17.152 40E278824A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.152 
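That exchange is the entire FIPS gate: fips.so must exist under the OpenSSL modules dir, OPENSSL_CONF is pointed at a generated spdk_fips.conf, 'openssl list -providers' must show exactly the base and fips providers, and a non-approved digest must fail, which the "Error setting digest" output confirms. Condensed into one check (the spdk_fips.conf contents are not visible in the trace, so the file is taken as given; the trace hashes /dev/fd/62 where this sketch uses stdin):

export OPENSSL_CONF=spdk_fips.conf

mapfile -t providers < <(openssl list -providers | grep name)
(( ${#providers[@]} == 2 )) || { echo 'expected base + fips providers'; exit 1; }

# MD5 is not FIPS-approved: success here would mean FIPS is NOT enforced
if echo test | openssl md5 >/dev/null 2>&1; then
    echo 'FIPS mode inactive'; exit 1
fi
echo 'FIPS mode confirmed'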
12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.152 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:22.419 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.419 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.419 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.419 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.419 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.419 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.419 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.420 12:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:22.420 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:22.420 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.420 12:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:22.420 Found net devices under 0000:af:00.0: cvl_0_0 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:22.420 Found net devices under 0000:af:00.1: cvl_0_1 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.420 12:27:29 
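The discovery loop above resolves each whitelisted PCI function to its kernel interface purely through sysfs: every entry under the device's net/ directory is an interface bound to that function. Reduced to its core, with the two E810 functions found on this host hardcoded for illustration:

for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue                 # glob did not match: no bound interface
        echo "Found net devices under $pci: ${dev##*/}"
    done
done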
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.420 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.679 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.679 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.679 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.679 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:24:22.679 00:24:22.679 --- 10.0.0.2 ping statistics --- 00:24:22.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.679 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:24:22.679 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:22.680 00:24:22.680 --- 10.0.0.1 ping statistics --- 00:24:22.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.680 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3710148 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3710148 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3710148 ']' 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.680 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:22.680 [2024-12-10 12:27:29.424691] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
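The nvmf_tcp_init sequence traced just before the pings builds the two-endpoint topology without leaving the host: cvl_0_0 moves into a private namespace as the target, cvl_0_1 stays in the root namespace as the initiator, and one ping in each direction proves the path before any NVMe traffic flows. The same plumbing with the helper indirection removed (every command below appears verbatim in the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                         # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> initiator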
00:24:22.680 [2024-12-10 12:27:29.424786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.938 [2024-12-10 12:27:29.539603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.938 [2024-12-10 12:27:29.643082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.938 [2024-12-10 12:27:29.643125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.938 [2024-12-10 12:27:29.643135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.938 [2024-12-10 12:27:29.643145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.938 [2024-12-10 12:27:29.643152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.938 [2024-12-10 12:27:29.644636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Dzn 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Dzn 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Dzn 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Dzn 00:24:23.505 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:23.763 [2024-12-10 12:27:30.432886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.763 [2024-12-10 12:27:30.448877] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:23.763 [2024-12-10 12:27:30.449104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.763 malloc0 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.763 12:27:30 
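The key handling above is the NVMe/TCP PSK interchange format in action: the NVMeTLSkey-1:01:... string is written byte-for-byte (echo -n, no newline) to a mktemp file that is immediately restricted to mode 0600, and that path is what gets registered with the keyring later. A minimal reproduction using the exact key value from the trace:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"     # no trailing newline; the interchange format is exact
chmod 0600 "$key_path"           # private to the owner, as the trace does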
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3710392 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3710392 /var/tmp/bdevperf.sock 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3710392 ']' 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.763 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.021 [2024-12-10 12:27:30.639559] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:24.021 [2024-12-10 12:27:30.639646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710392 ] 00:24:24.021 [2024-12-10 12:27:30.746350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.279 [2024-12-10 12:27:30.850292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.846 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.846 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:24.846 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Dzn 00:24:24.846 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.103 [2024-12-10 12:27:31.782966] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.103 TLSTESTn1 00:24:25.103 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.369 Running I/O for 10 seconds... 
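With the target listening on 10.0.0.2:4420, the initiator half runs inside bdevperf and is driven entirely over its private RPC socket: register the PSK file as key0, attach a TLS controller against cnode1, then kick off the timed verify workload. The three commands from the trace, with the jenkins workspace prefix dropped for readability:

rpc_sock=/var/tmp/bdevperf.sock
scripts/rpc.py -s "$rpc_sock" keyring_file_add_key key0 /tmp/spdk-psk.Dzn
scripts/rpc.py -s "$rpc_sock" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
examples/bdev/bdevperf/bdevperf.py -s "$rpc_sock" perform_tests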
00:24:27.341 4553.00 IOPS, 17.79 MiB/s [2024-12-10T11:27:35.102Z] 4608.00 IOPS, 18.00 MiB/s [2024-12-10T11:27:36.036Z] 4619.67 IOPS, 18.05 MiB/s [2024-12-10T11:27:37.410Z] 4641.75 IOPS, 18.13 MiB/s [2024-12-10T11:27:38.345Z] 4655.80 IOPS, 18.19 MiB/s [2024-12-10T11:27:39.280Z] 4639.00 IOPS, 18.12 MiB/s [2024-12-10T11:27:40.214Z] 4612.71 IOPS, 18.02 MiB/s [2024-12-10T11:27:41.149Z] 4593.00 IOPS, 17.94 MiB/s [2024-12-10T11:27:42.083Z] 4607.78 IOPS, 18.00 MiB/s [2024-12-10T11:27:42.083Z] 4609.40 IOPS, 18.01 MiB/s 00:24:35.257 Latency(us) 00:24:35.257 [2024-12-10T11:27:42.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.257 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:35.257 Verification LBA range: start 0x0 length 0x2000 00:24:35.257 TLSTESTn1 : 10.01 4615.46 18.03 0.00 0.00 27690.92 5742.20 26713.72 00:24:35.257 [2024-12-10T11:27:42.083Z] =================================================================================================================== 00:24:35.257 [2024-12-10T11:27:42.083Z] Total : 4615.46 18.03 0.00 0.00 27690.92 5742.20 26713.72 00:24:35.257 { 00:24:35.257 "results": [ 00:24:35.257 { 00:24:35.257 "job": "TLSTESTn1", 00:24:35.257 "core_mask": "0x4", 00:24:35.257 "workload": "verify", 00:24:35.257 "status": "finished", 00:24:35.257 "verify_range": { 00:24:35.257 "start": 0, 00:24:35.257 "length": 8192 00:24:35.257 }, 00:24:35.257 "queue_depth": 128, 00:24:35.257 "io_size": 4096, 00:24:35.257 "runtime": 10.014383, 00:24:35.257 "iops": 4615.461581607175, 00:24:35.257 "mibps": 18.029146803153026, 00:24:35.257 "io_failed": 0, 00:24:35.257 "io_timeout": 0, 00:24:35.257 "avg_latency_us": 27690.91516027038, 00:24:35.257 "min_latency_us": 5742.201904761905, 00:24:35.257 "max_latency_us": 26713.721904761904 00:24:35.257 } 00:24:35.257 ], 00:24:35.257 "core_count": 1 00:24:35.257 } 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:35.257 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:35.257 nvmf_trace.0 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3710392 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3710392 ']' 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
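As a sanity check, the numbers in the results block above are internally consistent: 4615.46 I/O per second at an io_size of 4096 bytes is 4615.46 x 4096 / 2^20 ≈ 18.03 MiB/s, matching the reported mibps, and over the 10.014 s runtime that is roughly 4615.46 x 10.014 ≈ 46,221 completed I/Os with zero failures or timeouts.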
common/autotest_common.sh@958 -- # kill -0 3710392 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3710392 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3710392' 00:24:35.516 killing process with pid 3710392 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3710392 00:24:35.516 Received shutdown signal, test time was about 10.000000 seconds 00:24:35.516 00:24:35.516 Latency(us) 00:24:35.516 [2024-12-10T11:27:42.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.516 [2024-12-10T11:27:42.342Z] =================================================================================================================== 00:24:35.516 [2024-12-10T11:27:42.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.516 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3710392 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.450 rmmod nvme_tcp 00:24:36.450 rmmod nvme_fabrics 00:24:36.450 rmmod nvme_keyring 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3710148 ']' 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3710148 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3710148 ']' 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3710148 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3710148 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:36.450 12:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3710148' 00:24:36.450 killing process with pid 3710148 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3710148 00:24:36.450 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3710148 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.825 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Dzn 00:24:39.727 00:24:39.727 real 0m22.916s 00:24:39.727 user 0m26.285s 00:24:39.727 sys 0m8.763s 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.727 ************************************ 00:24:39.727 END TEST nvmf_fips 00:24:39.727 ************************************ 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.727 ************************************ 00:24:39.727 START TEST nvmf_control_msg_list 00:24:39.727 ************************************ 00:24:39.727 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:39.986 * Looking for test storage... 
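Every suite in this run is wrapped by the same run_test harness visible in the banners: print the START TEST marker, time the script, then print END TEST plus the real/user/sys line on the way out. Schematically (reconstructed from the banners, not the literal autotest_common.sh implementation):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"       # e.g. test/nvmf/target/control_msg_list.sh --transport=tcp
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}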
00:24:39.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.986 --rc genhtml_branch_coverage=1 00:24:39.986 --rc genhtml_function_coverage=1 00:24:39.986 --rc genhtml_legend=1 00:24:39.986 --rc geninfo_all_blocks=1 00:24:39.986 --rc geninfo_unexecuted_blocks=1 00:24:39.986 00:24:39.986 ' 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.986 --rc genhtml_branch_coverage=1 00:24:39.986 --rc genhtml_function_coverage=1 00:24:39.986 --rc genhtml_legend=1 00:24:39.986 --rc geninfo_all_blocks=1 00:24:39.986 --rc geninfo_unexecuted_blocks=1 00:24:39.986 00:24:39.986 ' 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.986 --rc genhtml_branch_coverage=1 00:24:39.986 --rc genhtml_function_coverage=1 00:24:39.986 --rc genhtml_legend=1 00:24:39.986 --rc geninfo_all_blocks=1 00:24:39.986 --rc geninfo_unexecuted_blocks=1 00:24:39.986 00:24:39.986 ' 00:24:39.986 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.986 --rc genhtml_branch_coverage=1 00:24:39.986 --rc genhtml_function_coverage=1 00:24:39.986 --rc genhtml_legend=1 00:24:39.986 --rc geninfo_all_blocks=1 00:24:39.986 --rc geninfo_unexecuted_blocks=1 00:24:39.986 00:24:39.986 ' 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
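
The lcov probe traced above is scripts/common.sh's dotted-version comparison: split both version strings on '.', '-' or ':', then compare component by component, padding the shorter one with zeros; here it decides that lcov 1.15 predates 2.x and picks the old --rc lcov_* option spellings. Condensed to the two helpers actually exercised (the real script handles more operators and cases):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            ver1[v]=${ver1[v]:-0}   # so "2" compares like "2.0"
            ver2[v]=${ver2[v]:-0}
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    lt 1.15 2 && echo "lcov 1.15 predates the 2.x option names"
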
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
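
The enormous PATH values above are not corruption: paths/export.sh unconditionally prepends the same three toolchain directories every time it is sourced, and it is sourced once per test script in this run, so each pass adds another protoc/go/golangci prefix. Its effect, reduced to a sketch (the real export.sh may differ in detail):

    # /etc/opt/spdk-pkgdep/paths/export.sh, effectively:
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    # Source this N times and the three directories appear N times,
    # which is exactly the repetition visible in the traced PATH.
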
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.987 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:45.253 12:27:51 
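
The "[: : integer expression expected" message above is bash objecting to a numeric test on an empty string at nvmf/common.sh line 33; the test evaluates false (status 2) and the run continues, so it is noise rather than a failure. The failure mode and the usual guard, with an illustrative variable name (not the one common.sh actually uses):

    flag=""
    [ "$flag" -eq 1 ] && echo yes      # stderr: "[: : integer expression expected"; && skips
    [ "${flag:-0}" -eq 1 ] && echo yes # defaulted expansion: quiet, still false
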
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:45.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.253 12:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:45.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:45.253 Found net devices under 0000:af:00.0: cvl_0_0 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.253 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:45.254 Found net devices under 0000:af:00.1: cvl_0_1 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.254 12:27:51 
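
gather_supported_nvmf_pci_devs, traced above, buckets PCI functions by vendor:device pair (0x8086:0x159b is the Intel E810-family part that the ice driver claimed here) and then resolves each function to its kernel netdev purely through sysfs. The lookup at its core, as a standalone sketch:

    # A netdev bound to a PCI function shows up as a directory entry
    # under that function's sysfs 'net' subdirectory; the helper also
    # insists the link state is 'up' before using it.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue   # function has no netdev bound
            dev=${netdir##*/}
            echo "Found net devices under $pci: $dev ($(cat "/sys/class/net/$dev/operstate"))"
        done
    done
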
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.254 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:24:45.254 00:24:45.254 --- 10.0.0.2 ping statistics --- 00:24:45.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.254 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:24:45.254 00:24:45.254 --- 10.0.0.1 ping statistics --- 00:24:45.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.254 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3716006 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3716006 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3716006 ']' 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
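
nvmf_tcp_init, traced above, builds the whole two-host topology from one dual-port NIC: cvl_0_0 is moved into a private namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the firewall exception is tagged with an SPDK_NVMF comment precisely so teardown can strip these rules later without guessing. The same steps, condensed from the traced commands:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Tag the ACCEPT rule so cleanup can find it later:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown (the 'iptr' helper seen elsewhere in this log) undoes it in one pass:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
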
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.254 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.513 [2024-12-10 12:27:52.141483] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:45.513 [2024-12-10 12:27:52.141589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.513 [2024-12-10 12:27:52.257240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.771 [2024-12-10 12:27:52.357378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.771 [2024-12-10 12:27:52.357419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.771 [2024-12-10 12:27:52.357429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.771 [2024-12-10 12:27:52.357440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.771 [2024-12-10 12:27:52.357447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
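
nvmfappstart, above, launches the target inside the namespace and then sits in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; the EAL and reactor notices that follow are the target finishing startup. A minimal stand-in for that wait, run from an SPDK tree (the real helper adds retries, timeouts and better diagnostics):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the app answers a real RPC.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
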
00:24:45.771 [2024-12-10 12:27:52.358600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:46.338 [2024-12-10 12:27:52.979374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.338 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:46.338 Malloc0 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.338 12:27:53 
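
Before provisioning anything, the test arms its cleanup as a trap so that nvmftestfini runs on every exit path, with a shared-memory post-mortem attempted first; the '|| :' keeps a failed post-mortem from aborting the trap. On the success path the trap is disarmed and teardown is called explicitly, which is the 'trap - SIGINT SIGTERM EXIT' visible further down. The pattern, as traced:

    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

    # ... test body ...

    trap - SIGINT SIGTERM EXIT   # success path: disarm, then tear down once
    nvmftestfini
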
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:46.338 [2024-12-10 12:27:53.051964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3716120 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3716121 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3716122 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3716120 00:24:46.338 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:46.338 [2024-12-10 12:27:53.162940] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:46.596 [2024-12-10 12:27:53.173369] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:46.596 [2024-12-10 12:27:53.173581] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:47.530 Initializing NVMe Controllers 00:24:47.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:47.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:47.530 Initialization complete. Launching workers. 
00:24:47.530 ======================================================== 00:24:47.530 Latency(us) 00:24:47.530 Device Information : IOPS MiB/s Average min max 00:24:47.530 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5400.00 21.09 184.74 154.88 41135.73 00:24:47.530 ======================================================== 00:24:47.530 Total : 5400.00 21.09 184.74 154.88 41135.73 00:24:47.530 00:24:47.530 Initializing NVMe Controllers 00:24:47.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:47.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:47.530 Initialization complete. Launching workers. 00:24:47.530 ======================================================== 00:24:47.530 Latency(us) 00:24:47.530 Device Information : IOPS MiB/s Average min max 00:24:47.530 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40876.66 40236.45 41015.22 00:24:47.530 ======================================================== 00:24:47.530 Total : 25.00 0.10 40876.66 40236.45 41015.22 00:24:47.530 00:24:47.530 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3716121 00:24:47.530 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3716122 00:24:47.788 Initializing NVMe Controllers 00:24:47.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:47.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:47.788 Initialization complete. Launching workers. 00:24:47.788 ======================================================== 00:24:47.788 Latency(us) 00:24:47.788 Device Information : IOPS MiB/s Average min max 00:24:47.788 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40904.73 40806.63 41007.54 00:24:47.788 ======================================================== 00:24:47.788 Total : 25.00 0.10 40904.73 40806.63 41007.54 00:24:47.788 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.788 rmmod nvme_tcp 00:24:47.788 rmmod nvme_fabrics 00:24:47.788 rmmod nvme_keyring 00:24:47.788 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
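
The whole experiment above, restated as plain rpc.py and perf invocations (paths relative to an SPDK tree; '-t tcp -o' is copied verbatim from the traced $NVMF_TRANSPORT_OPTS): the transport is created with its control-message pool deliberately capped at one, a single 32 MiB malloc namespace is exported, and three single-queue perf instances race for the same listener.

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    for mask in 0x2 0x4 0x8; do   # one perf process per core, as in the test
        ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait

The split in the result tables (one instance at ~5400 IOPS with a 0.18 ms average, the other two completing only 25 I/Os at ~41 ms) is the behaviour the test is after: with a single control message available, the other connections queue behind it, which is presumably what the ~41 ms maxima on all three devices reflect.
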
nvmf/common.sh@517 -- # '[' -n 3716006 ']' 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3716006 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3716006 ']' 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3716006 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716006 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3716006' 00:24:47.789 killing process with pid 3716006 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3716006 00:24:47.789 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3716006 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.163 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.065 00:24:51.065 real 0m11.252s 00:24:51.065 user 0m8.412s 00:24:51.065 sys 0m5.074s 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:51.065 ************************************ 00:24:51.065 END TEST nvmf_control_msg_list 00:24:51.065 
************************************ 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:51.065 ************************************ 00:24:51.065 START TEST nvmf_wait_for_buf 00:24:51.065 ************************************ 00:24:51.065 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:51.324 * Looking for test storage... 00:24:51.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:51.324 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:51.324 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:51.324 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:51.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.324 --rc genhtml_branch_coverage=1 00:24:51.324 --rc genhtml_function_coverage=1 00:24:51.324 --rc genhtml_legend=1 00:24:51.324 --rc geninfo_all_blocks=1 00:24:51.324 --rc geninfo_unexecuted_blocks=1 00:24:51.324 00:24:51.324 ' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:51.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.324 --rc genhtml_branch_coverage=1 00:24:51.324 --rc genhtml_function_coverage=1 00:24:51.324 --rc genhtml_legend=1 00:24:51.324 --rc geninfo_all_blocks=1 00:24:51.324 --rc geninfo_unexecuted_blocks=1 00:24:51.324 00:24:51.324 ' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:51.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.324 --rc genhtml_branch_coverage=1 00:24:51.324 --rc genhtml_function_coverage=1 00:24:51.324 --rc genhtml_legend=1 00:24:51.324 --rc geninfo_all_blocks=1 00:24:51.324 --rc geninfo_unexecuted_blocks=1 00:24:51.324 00:24:51.324 ' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:51.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.324 --rc genhtml_branch_coverage=1 00:24:51.324 --rc genhtml_function_coverage=1 00:24:51.324 --rc genhtml_legend=1 00:24:51.324 --rc geninfo_all_blocks=1 00:24:51.324 --rc geninfo_unexecuted_blocks=1 00:24:51.324 00:24:51.324 ' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.324 12:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.324 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.325 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.589 
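# NIC discovery above is table-driven: pci_bus_cache maps "vendor:device" IDs
# to PCI addresses, each supported family collects its matches into an array,
# and e810 wins here because both ports report 0x8086:0x159b. A minimal sketch
# of that classification, assuming pci_bus_cache was populated by an earlier
# bus scan:
declare -A pci_bus_cache=([0x8086:0x159b]="0000:af:00.0 0000:af:00.1")
intel=0x8086
e810=()
e810+=(${pci_bus_cache["$intel:0x159b"]})   # unquoted on purpose: word-split the addresses
pci_devs=("${e810[@]}")
echo "candidate NVMe-oF ports: ${pci_devs[*]}"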
12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:56.589 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:56.589 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:56.589 Found net devices under 0000:af:00.0: cvl_0_0 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:56.589 Found net devices under 0000:af:00.1: cvl_0_1 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.589 12:28:02 
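# With two usable ports, nvmf_tcp_init above splits them into a point-to-point
# pair: index 0 becomes the target side at 10.0.0.2 and index 1 the initiator
# side at 10.0.0.1. A minimal sketch using the same variable names as the
# trace:
net_devs=(cvl_0_0 cvl_0_1)
TCP_INTERFACE_LIST=("${net_devs[@]}")
if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}      # gets 10.0.0.2
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}   # gets 10.0.0.1
fi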
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.589 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:24:56.590 00:24:56.590 --- 10.0.0.2 ping statistics --- 00:24:56.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.590 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:24:56.590 00:24:56.590 --- 10.0.0.1 ping statistics --- 00:24:56.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.590 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3719813 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3719813 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3719813 ']' 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:56.590 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:56.590 [2024-12-10 12:28:02.981609] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
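# nvmfappstart above launches the target inside the new namespace with
# --wait-for-rpc, so it idles before framework init until the RPC socket
# answers; waitforlisten then polls for readiness. A minimal sketch of that
# launch-and-wait pattern, with waitforlisten reduced to a polling loop and
# $rootdir standing in for the workspace's spdk checkout:
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done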
00:24:56.590 [2024-12-10 12:28:02.981698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.590 [2024-12-10 12:28:03.098063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.590 [2024-12-10 12:28:03.206239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.590 [2024-12-10 12:28:03.206281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.590 [2024-12-10 12:28:03.206292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.590 [2024-12-10 12:28:03.206302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.590 [2024-12-10 12:28:03.206311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.590 [2024-12-10 12:28:03.207816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:57.156 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.156 12:28:03 
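# The RPCs above are the whole premise of wait_for_buf: accel caching is
# disabled and the small iobuf pool is capped at 154 buffers *before*
# framework_start_init allocates the pools, so the TCP transport (created
# below with -n 24 -b 24, i.e. few shared buffers and a small cache) must
# queue I/O while waiting for buffers instead of failing. The same sequence
# via rpc.py, assuming the default /var/tmp/spdk.sock socket and $rootdir as
# the spdk checkout:
rpc="$rootdir/scripts/rpc.py"
$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$rpc framework_start_init   # pools get carved out at this point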
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.415 Malloc0 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.415 [2024-12-10 12:28:04.113006] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.415 [2024-12-10 12:28:04.137215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.415 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.673 [2024-12-10 12:28:04.246315] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:59.049 Initializing NVMe Controllers 00:24:59.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:59.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:59.049 Initialization complete. Launching workers. 00:24:59.049 ======================================================== 00:24:59.049 Latency(us) 00:24:59.049 Device Information : IOPS MiB/s Average min max 00:24:59.049 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.54 16.07 32199.65 7164.90 63847.40 00:24:59.049 ======================================================== 00:24:59.049 Total : 128.54 16.07 32199.65 7164.90 63847.40 00:24:59.049 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.049 rmmod nvme_tcp 00:24:59.049 rmmod nvme_fabrics 00:24:59.049 rmmod nvme_keyring 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3719813 ']' 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3719813 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3719813 ']' 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3719813 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
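# Pass/fail above hinges on one counter, not on throughput: iobuf_get_stats
# reports how often the nvmf_TCP module had to retry a small-buffer
# allocation, and any non-zero value (2038 here) proves requests waited for
# buffers as intended. A minimal standalone version of that check, with
# $rootdir standing in for the spdk checkout:
rpc="$rootdir/scripts/rpc.py"
retry_count=$($rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
    echo "expected small-pool retries, saw none" >&2
    exit 1
fi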
common/autotest_common.sh@959 -- # uname 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.049 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3719813 00:24:59.307 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.307 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.307 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3719813' 00:24:59.307 killing process with pid 3719813 00:24:59.307 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3719813 00:24:59.307 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3719813 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.242 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.770 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.770 00:25:02.770 real 0m11.124s 00:25:02.770 user 0m5.336s 00:25:02.770 sys 0m4.190s 00:25:02.770 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.770 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:02.771 ************************************ 00:25:02.771 END TEST nvmf_wait_for_buf 00:25:02.771 ************************************ 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.771 12:28:09 
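# The teardown above only strips firewall state the test created: every rule
# was inserted with an "SPDK_NVMF" comment tag, so restoring the saved
# ruleset minus tagged lines leaves unrelated rules alone. That is all the
# iptr helper does:
iptables-save | grep -v SPDK_NVMF | iptables-restore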
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:02.771 ************************************ 00:25:02.771 START TEST nvmf_fuzz 00:25:02.771 ************************************ 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:02.771 * Looking for test storage... 00:25:02.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:02.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.771 --rc genhtml_branch_coverage=1 00:25:02.771 --rc genhtml_function_coverage=1 00:25:02.771 --rc genhtml_legend=1 00:25:02.771 --rc geninfo_all_blocks=1 00:25:02.771 --rc geninfo_unexecuted_blocks=1 00:25:02.771 00:25:02.771 ' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:02.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.771 --rc genhtml_branch_coverage=1 00:25:02.771 --rc genhtml_function_coverage=1 00:25:02.771 --rc genhtml_legend=1 00:25:02.771 --rc geninfo_all_blocks=1 00:25:02.771 --rc geninfo_unexecuted_blocks=1 00:25:02.771 00:25:02.771 ' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:02.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.771 --rc genhtml_branch_coverage=1 00:25:02.771 --rc genhtml_function_coverage=1 00:25:02.771 --rc genhtml_legend=1 00:25:02.771 --rc geninfo_all_blocks=1 00:25:02.771 --rc geninfo_unexecuted_blocks=1 00:25:02.771 00:25:02.771 ' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:02.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.771 --rc genhtml_branch_coverage=1 00:25:02.771 --rc genhtml_function_coverage=1 00:25:02.771 --rc genhtml_legend=1 00:25:02.771 --rc geninfo_all_blocks=1 00:25:02.771 --rc geninfo_unexecuted_blocks=1 00:25:02.771 00:25:02.771 ' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.771 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.772 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.032 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.033 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.033 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.033 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.033 Found net devices under 0000:af:00.1: cvl_0_1 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:25:08.033 00:25:08.033 --- 10.0.0.2 ping statistics --- 00:25:08.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.033 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:08.033 00:25:08.033 --- 10.0.0.1 ping statistics --- 00:25:08.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.033 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3723919 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3723919 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3723919 ']' 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
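The bring-up traced above is the standard two-port split used by nvmf/common.sh: the target-side interface (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2/24, while the initiator-side interface (cvl_0_1) stays in the root namespace on 10.0.0.1/24, and a ping in each direction confirms the path before the target application starts. A minimal standalone sketch of the same topology (run as root), using the interface and namespace names from this log; any cabled port pair would work the same way:

  # flush stale addresses on both ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # isolate the target port in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 outside, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring the links up (including loopback inside the namespace) and verify both directions
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1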
00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.033 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.598 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.855 Malloc0 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:08.855 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:40.909 Fuzzing completed. 
Shutting down the fuzz application 00:25:40.909 00:25:40.909 Dumping successful admin opcodes: 00:25:40.909 9, 10, 00:25:40.909 Dumping successful io opcodes: 00:25:40.909 0, 9, 00:25:40.909 NS: 0x2000008efec0 I/O qp, Total commands completed: 685662, total successful commands: 4004, random_seed: 3428314560 00:25:40.909 NS: 0x2000008efec0 admin qp, Total commands completed: 76720, total successful commands: 16, random_seed: 2804539904 00:25:40.909 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:41.167 Fuzzing completed. Shutting down the fuzz application 00:25:41.167 00:25:41.167 Dumping successful admin opcodes: 00:25:41.167 00:25:41.167 Dumping successful io opcodes: 00:25:41.167 00:25:41.167 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3985008884 00:25:41.167 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 3985111192 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.167 rmmod nvme_tcp 00:25:41.167 rmmod nvme_fabrics 00:25:41.167 rmmod nvme_keyring 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3723919 ']' 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3723919 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3723919 ']' 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3723919 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723919 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723919' 00:25:41.167 killing process with pid 3723919 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3723919 00:25:41.167 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3723919 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.539 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:45.067 00:25:45.067 real 0m42.247s 00:25:45.067 user 0m57.045s 00:25:45.067 sys 0m15.687s 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.067 ************************************ 00:25:45.067 END TEST nvmf_fuzz 00:25:45.067 ************************************ 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:45.067 ************************************ 00:25:45.067 START 
TEST nvmf_multiconnection 00:25:45.067 ************************************ 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:45.067 * Looking for test storage... 00:25:45.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:45.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.067 --rc genhtml_branch_coverage=1 00:25:45.067 --rc genhtml_function_coverage=1 00:25:45.067 --rc genhtml_legend=1 00:25:45.067 --rc geninfo_all_blocks=1 00:25:45.067 --rc geninfo_unexecuted_blocks=1 00:25:45.067 00:25:45.067 ' 00:25:45.067 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:45.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.067 --rc genhtml_branch_coverage=1 00:25:45.067 --rc genhtml_function_coverage=1 00:25:45.067 --rc genhtml_legend=1 00:25:45.067 --rc geninfo_all_blocks=1 00:25:45.067 --rc geninfo_unexecuted_blocks=1 00:25:45.067 00:25:45.068 ' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.068 --rc genhtml_branch_coverage=1 00:25:45.068 --rc genhtml_function_coverage=1 00:25:45.068 --rc genhtml_legend=1 00:25:45.068 --rc geninfo_all_blocks=1 00:25:45.068 --rc geninfo_unexecuted_blocks=1 00:25:45.068 00:25:45.068 ' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.068 --rc genhtml_branch_coverage=1 00:25:45.068 --rc genhtml_function_coverage=1 00:25:45.068 --rc genhtml_legend=1 00:25:45.068 --rc geninfo_all_blocks=1 00:25:45.068 --rc geninfo_unexecuted_blocks=1 00:25:45.068 00:25:45.068 ' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:45.068 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.437 12:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:50.437 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.437 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:50.438 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:50.438 Found net devices under 0000:af:00.0: cvl_0_0 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:50.438 Found net devices under 0000:af:00.1: cvl_0_1 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:25:50.438 00:25:50.438 --- 10.0.0.2 ping statistics --- 00:25:50.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.438 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:25:50.438 00:25:50.438 --- 10.0.0.1 ping statistics --- 00:25:50.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.438 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3732741 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3732741 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3732741 ']' 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
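The ipts helper traced here tags every rule it inserts with an SPDK_NVMF: comment, which is what makes the teardown seen after the fuzz test above (iptables-save | grep -v SPDK_NVMF | iptables-restore) safe: only the test's own rules are swept, and everything else in the ruleset survives the round-trip. The pattern, reduced to plain iptables exactly as the trace expands it:

  # open the NVMe/TCP port and tag the rule for later cleanup
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown: reload the ruleset with every SPDK_NVMF-tagged line filtered out
  iptables-save | grep -v SPDK_NVMF | iptables-restore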
00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.438 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 [2024-12-10 12:28:56.635365] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:25:50.438 [2024-12-10 12:28:56.635454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.438 [2024-12-10 12:28:56.752316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.438 [2024-12-10 12:28:56.854798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.438 [2024-12-10 12:28:56.854840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.438 [2024-12-10 12:28:56.854851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.438 [2024-12-10 12:28:56.854862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.438 [2024-12-10 12:28:56.854870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
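Unlike the fuzz target, which pinned itself to one core with -m 0x1, this multiconnection target is launched with -m 0xF, so the EAL initialization above reports four available cores and the lines that follow show one reactor thread starting per bit in the mask. For the first N cores the mask is simply (1 << N) - 1; a quick shell check:

  # core mask covering the first N cores
  n_cores=4
  printf -- '-m 0x%X\n' $(( (1 << n_cores) - 1 ))   # prints: -m 0xF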
00:25:50.438 [2024-12-10 12:28:56.857342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.438 [2024-12-10 12:28:56.857419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.438 [2024-12-10 12:28:56.857440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.438 [2024-12-10 12:28:56.857433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.697 [2024-12-10 12:28:57.486574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.697 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 Malloc1 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
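At this point in the trace the TCP transport exists (nvmf_create_transport -t tcp -o -u 8192, acknowledged by the "*** TCP Transport Init ***" notice), Malloc1 has been created and attached to cnode1 as a namespace, and the listener call that follows completes the first subsystem. The ordering matters: a listener can only be added once a transport of the matching trtype has been registered. rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; spelled out by hand, with a follow-up query to confirm the transport took (the reading of -u as the IO unit size is from memory of rpc.py's flags, not from this trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192   # -u 8192: 8 KiB IO unit size (assumed)
  $rpc nvmf_get_transports                       # sanity check: should list the tcp transport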
00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 [2024-12-10 12:28:57.622803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 Malloc2 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:50.955 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.955 12:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 Malloc3 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 Malloc4 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:51.213 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.214 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.214 Malloc5 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.214 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 Malloc6 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 Malloc7 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
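
The xtrace entries above repeat steps 21-25 of target/multiconnection.sh once per subsystem: create a malloc bdev, create the NVMe-oF subsystem with a predictable serial, attach the bdev as a namespace, and expose a TCP listener. A minimal sketch of that loop, reconstructed from the traced commands — the rpc_cmd name, the NVMF_SUBSYS loop bound, and the 10.0.0.2:4420 listener address all appear in the trace; the assumption that rpc_cmd forwards to SPDK's scripts/rpc.py against the running target is mine:

    # Sketch of multiconnection.sh@21-25 as seen in the xtrace output above.
    # rpc_cmd is assumed to wrap scripts/rpc.py for the running nvmf target.
    for i in $(seq 1 $NVMF_SUBSYS); do                                    # NVMF_SUBSYS=11 in this run
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                   # 64 MiB bdev, 512-byte blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"  # -a: any host; -s: serial
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

Each iteration produces the four rpc_cmd entries visible in the trace, and the SPDK$i serial assigned here is what the host side later greps for in waitforserial.
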
00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.473 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 Malloc8 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 Malloc9 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:51.732 12:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 Malloc10 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:51.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.733 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.991 Malloc11 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.991 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:53.364 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:53.364 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:53.364 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.365 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:53.365 12:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:55.262 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:55.262 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:55.262 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:55.262 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:55.262 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.262 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:55.262 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.263 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:56.196 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:56.196 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:56.196 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.196 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:56.196 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:58.093 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:58.093 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:58.093 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:58.094 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:58.094 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.094 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:58.094 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.094 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:59.466 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:59.466 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:59.466 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:59.466 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:59.466 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.994 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:02.928 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:02.928 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:02.928 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.928 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:02.928 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.827 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:06.200 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:06.200 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:26:06.200 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.200 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:06.200 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:08.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:08.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:08.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:08.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:08.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:08.098 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:08.098 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.098 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:09.471 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:09.471 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:09.471 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.471 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:09.471 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.368 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:12.741 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:12.741 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:12.741 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.741 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:12.741 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.267 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:16.201 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:16.201 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:16.201 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.201 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:16.201 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.100 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:19.472 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:19.472 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:19.472 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.472 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:19.472 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.999 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:23.372 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:23.372 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:23.372 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:23.372 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:23.372 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:25.269 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:25.270 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:25.270 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:25.270 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:25.270 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:25.270 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:25.270 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.270 12:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:26.642 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:26.642 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:26.642 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:26.642 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:26.642 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:28.541 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:28.541 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:28.541 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:28.541 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:28.541 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:28.541 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:28.541 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:28.541 [global] 00:26:28.541 thread=1 00:26:28.541 invalidate=1 00:26:28.541 rw=read 00:26:28.541 time_based=1 00:26:28.541 runtime=10 00:26:28.541 ioengine=libaio 00:26:28.541 direct=1 00:26:28.541 bs=262144 00:26:28.541 iodepth=64 00:26:28.541 norandommap=1 00:26:28.541 numjobs=1 00:26:28.541 00:26:28.541 [job0] 00:26:28.541 filename=/dev/nvme0n1 00:26:28.541 [job1] 00:26:28.541 filename=/dev/nvme10n1 00:26:28.541 [job2] 00:26:28.541 filename=/dev/nvme1n1 00:26:28.541 [job3] 00:26:28.541 filename=/dev/nvme2n1 00:26:28.798 [job4] 00:26:28.798 filename=/dev/nvme3n1 00:26:28.798 [job5] 00:26:28.798 filename=/dev/nvme4n1 00:26:28.798 [job6] 00:26:28.798 filename=/dev/nvme5n1 00:26:28.798 [job7] 00:26:28.798 filename=/dev/nvme6n1 00:26:28.798 [job8] 00:26:28.798 filename=/dev/nvme7n1 00:26:28.798 [job9] 00:26:28.798 filename=/dev/nvme8n1 00:26:28.798 [job10] 00:26:28.798 filename=/dev/nvme9n1 00:26:28.798 Could not set queue depth (nvme0n1) 00:26:28.798 Could not set queue depth (nvme10n1) 00:26:28.798 Could not set queue depth (nvme1n1) 00:26:28.798 Could not set queue depth (nvme2n1) 00:26:28.798 Could not set queue depth (nvme3n1) 00:26:28.798 Could not set queue depth (nvme4n1) 00:26:28.798 Could not set queue depth (nvme5n1) 00:26:28.798 Could not set queue depth (nvme6n1) 00:26:28.798 Could not set queue depth (nvme7n1) 00:26:28.798 Could not set queue depth (nvme8n1) 00:26:28.798 Could not set queue depth (nvme9n1) 00:26:29.054 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.054 fio-3.35 00:26:29.054 Starting 11 threads 00:26:41.251 00:26:41.251 job0: (groupid=0, jobs=1): err= 0: pid=3739278: Tue Dec 10 12:29:46 2024 00:26:41.251 read: IOPS=398, BW=99.7MiB/s (105MB/s)(1009MiB/10116msec) 00:26:41.251 slat (usec): min=9, max=181461, avg=2476.02, stdev=11322.04 00:26:41.251 clat (msec): min=16, max=760, avg=157.82, stdev=168.67 00:26:41.251 lat (msec): min=16, max=761, avg=160.30, stdev=171.22 00:26:41.251 clat percentiles (msec): 00:26:41.251 | 1.00th=[ 25], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 33], 00:26:41.251 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 78], 60.00th=[ 109], 00:26:41.251 | 70.00th=[ 161], 80.00th=[ 239], 90.00th=[ 485], 95.00th=[ 567], 00:26:41.251 | 99.00th=[ 634], 99.50th=[ 667], 99.90th=[ 684], 99.95th=[ 718], 00:26:41.251 | 99.99th=[ 760] 00:26:41.251 bw ( KiB/s): min=23552, max=382464, per=13.49%, avg=101657.60, stdev=106029.55, samples=20 00:26:41.251 iops : min= 92, max= 1494, avg=397.10, stdev=414.18, samples=20 00:26:41.251 lat (msec) : 20=0.02%, 50=26.15%, 100=32.15%, 250=22.91%, 500=10.11% 00:26:41.251 lat (msec) : 750=8.63%, 1000=0.02% 00:26:41.251 cpu : usr=0.12%, sys=1.60%, ctx=588, majf=0, minf=4097 00:26:41.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:41.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.251 issued rwts: total=4034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.251 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.251 job1: (groupid=0, jobs=1): err= 0: pid=3739286: Tue Dec 10 12:29:46 2024 00:26:41.251 read: IOPS=358, BW=89.7MiB/s (94.1MB/s)(908MiB/10114msec) 00:26:41.251 slat (usec): min=16, max=193323, avg=1682.01, stdev=8539.81 00:26:41.251 clat (usec): min=699, max=794763, avg=176452.77, stdev=153700.39 00:26:41.251 lat (usec): min=727, max=794796, avg=178134.78, stdev=154858.27 00:26:41.251 clat percentiles (usec): 00:26:41.251 | 1.00th=[ 922], 5.00th=[ 1156], 10.00th=[ 4424], 20.00th=[ 32900], 00:26:41.251 | 30.00th=[ 58459], 40.00th=[105382], 50.00th=[147850], 60.00th=[206570], 00:26:41.251 | 70.00th=[244319], 80.00th=[299893], 90.00th=[383779], 95.00th=[446694], 00:26:41.251 | 99.00th=[734004], 99.50th=[750781], 99.90th=[792724], 99.95th=[792724], 00:26:41.251 | 
99.99th=[792724] 00:26:41.251 bw ( KiB/s): min=33792, max=262144, per=12.11%, avg=91293.25, stdev=61390.34, samples=20 00:26:41.251 iops : min= 132, max= 1024, avg=356.60, stdev=239.82, samples=20 00:26:41.251 lat (usec) : 750=0.11%, 1000=1.90% 00:26:41.251 lat (msec) : 2=4.27%, 4=3.58%, 10=4.90%, 20=1.13%, 50=8.57% 00:26:41.251 lat (msec) : 100=15.07%, 250=31.82%, 500=25.70%, 750=2.45%, 1000=0.50% 00:26:41.251 cpu : usr=0.18%, sys=1.26%, ctx=1462, majf=0, minf=4097 00:26:41.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:41.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.251 issued rwts: total=3630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.251 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.251 job2: (groupid=0, jobs=1): err= 0: pid=3739300: Tue Dec 10 12:29:46 2024 00:26:41.251 read: IOPS=167, BW=41.8MiB/s (43.8MB/s)(424MiB/10140msec) 00:26:41.251 slat (usec): min=14, max=202084, avg=3802.31, stdev=18842.61 00:26:41.251 clat (usec): min=1848, max=983618, avg=378649.28, stdev=258816.54 00:26:41.251 lat (usec): min=1895, max=983651, avg=382451.59, stdev=261368.35 00:26:41.251 clat percentiles (msec): 00:26:41.251 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 88], 00:26:41.251 | 30.00th=[ 148], 40.00th=[ 279], 50.00th=[ 422], 60.00th=[ 506], 00:26:41.251 | 70.00th=[ 558], 80.00th=[ 617], 90.00th=[ 718], 95.00th=[ 768], 00:26:41.251 | 99.00th=[ 885], 99.50th=[ 902], 99.90th=[ 986], 99.95th=[ 986], 00:26:41.251 | 99.99th=[ 986] 00:26:41.251 bw ( KiB/s): min=16384, max=112128, per=5.54%, avg=41753.60, stdev=25517.66, samples=20 00:26:41.251 iops : min= 64, max= 438, avg=163.10, stdev=99.68, samples=20 00:26:41.251 lat (msec) : 2=0.24%, 4=0.71%, 10=8.85%, 20=1.53%, 50=2.42% 00:26:41.251 lat (msec) : 100=8.26%, 250=16.05%, 500=21.06%, 750=34.57%, 1000=6.31% 00:26:41.251 cpu : usr=0.04%, sys=0.68%, ctx=457, majf=0, minf=4097 00:26:41.251 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:26:41.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.251 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.251 issued rwts: total=1695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.251 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.251 job3: (groupid=0, jobs=1): err= 0: pid=3739310: Tue Dec 10 12:29:46 2024 00:26:41.251 read: IOPS=262, BW=65.7MiB/s (68.9MB/s)(664MiB/10107msec) 00:26:41.251 slat (usec): min=14, max=164874, avg=3150.08, stdev=12592.96 00:26:41.251 clat (msec): min=19, max=801, avg=239.97, stdev=157.51 00:26:41.251 lat (msec): min=19, max=801, avg=243.12, stdev=159.50 00:26:41.251 clat percentiles (msec): 00:26:41.251 | 1.00th=[ 46], 5.00th=[ 65], 10.00th=[ 90], 20.00th=[ 128], 00:26:41.251 | 30.00th=[ 146], 40.00th=[ 159], 50.00th=[ 176], 60.00th=[ 213], 00:26:41.251 | 70.00th=[ 271], 80.00th=[ 368], 90.00th=[ 506], 95.00th=[ 592], 00:26:41.251 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 768], 99.95th=[ 802], 00:26:41.251 | 99.99th=[ 802] 00:26:41.251 bw ( KiB/s): min=23040, max=171520, per=8.81%, avg=66380.80, stdev=39135.19, samples=20 00:26:41.251 iops : min= 90, max= 670, avg=259.30, stdev=152.87, samples=20 00:26:41.251 lat (msec) : 20=0.04%, 50=2.11%, 100=9.03%, 250=56.27%, 500=22.17% 00:26:41.251 lat (msec) : 750=10.20%, 1000=0.19% 00:26:41.251 cpu : usr=0.11%, sys=1.21%, ctx=452, majf=0, 
minf=4097 00:26:41.251 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:41.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.251 issued rwts: total=2657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.251 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.251 job4: (groupid=0, jobs=1): err= 0: pid=3739316: Tue Dec 10 12:29:46 2024 00:26:41.251 read: IOPS=377, BW=94.3MiB/s (98.9MB/s)(954MiB/10111msec) 00:26:41.251 slat (usec): min=10, max=212176, avg=2067.40, stdev=9547.56 00:26:41.251 clat (usec): min=1658, max=820874, avg=167398.42, stdev=144553.92 00:26:41.251 lat (usec): min=1713, max=820926, avg=169465.81, stdev=145854.73 00:26:41.251 clat percentiles (msec): 00:26:41.251 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 52], 20.00th=[ 67], 00:26:41.251 | 30.00th=[ 77], 40.00th=[ 87], 50.00th=[ 109], 60.00th=[ 148], 00:26:41.251 | 70.00th=[ 192], 80.00th=[ 243], 90.00th=[ 393], 95.00th=[ 481], 00:26:41.251 | 99.00th=[ 667], 99.50th=[ 768], 99.90th=[ 818], 99.95th=[ 818], 00:26:41.251 | 99.99th=[ 818] 00:26:41.251 bw ( KiB/s): min=22016, max=242688, per=12.74%, avg=96025.60, stdev=66125.24, samples=20 00:26:41.251 iops : min= 86, max= 948, avg=375.10, stdev=258.30, samples=20 00:26:41.251 lat (msec) : 2=0.05%, 4=0.10%, 10=2.20%, 20=2.52%, 50=4.77% 00:26:41.251 lat (msec) : 100=35.47%, 250=35.61%, 500=14.74%, 750=3.88%, 1000=0.66% 00:26:41.251 cpu : usr=0.15%, sys=1.56%, ctx=891, majf=0, minf=4097 00:26:41.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:41.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.251 issued rwts: total=3814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.251 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.251 job5: (groupid=0, jobs=1): err= 0: pid=3739338: Tue Dec 10 12:29:46 2024 00:26:41.251 read: IOPS=199, BW=49.9MiB/s (52.3MB/s)(503MiB/10071msec) 00:26:41.251 slat (usec): min=23, max=196696, avg=2619.81, stdev=12720.91 00:26:41.251 clat (usec): min=1518, max=734394, avg=317660.09, stdev=161802.00 00:26:41.251 lat (usec): min=1555, max=834250, avg=320279.91, stdev=163299.37 00:26:41.251 clat percentiles (msec): 00:26:41.251 | 1.00th=[ 4], 5.00th=[ 79], 10.00th=[ 106], 20.00th=[ 188], 00:26:41.251 | 30.00th=[ 228], 40.00th=[ 262], 50.00th=[ 309], 60.00th=[ 342], 00:26:41.251 | 70.00th=[ 393], 80.00th=[ 472], 90.00th=[ 550], 95.00th=[ 609], 00:26:41.251 | 99.00th=[ 667], 99.50th=[ 701], 99.90th=[ 726], 99.95th=[ 726], 00:26:41.251 | 99.99th=[ 735] 00:26:41.251 bw ( KiB/s): min=23086, max=104448, per=6.61%, avg=49845.50, stdev=22694.32, samples=20 00:26:41.251 iops : min= 90, max= 408, avg=194.70, stdev=88.66, samples=20 00:26:41.251 lat (msec) : 2=0.05%, 4=1.24%, 10=1.09%, 20=0.30%, 50=1.79% 00:26:41.251 lat (msec) : 100=4.88%, 250=27.96%, 500=48.06%, 750=14.63% 00:26:41.251 cpu : usr=0.05%, sys=0.86%, ctx=623, majf=0, minf=4097 00:26:41.251 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:41.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.251 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.252 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.252 
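
The /dev/nvmeXn1 devices consumed by these fio jobs were attached by the connect loop traced earlier (multiconnection.sh@28-30), which runs nvme connect against cnode1..cnode11 and then blocks in waitforserial until the block device surfaces. A sketch of that polling helper, pieced together from the autotest_common.sh@1202-1212 trace — the function signature, the optional second argument, and the in-loop sleep placement are assumptions, since the log only shows single-iteration successes:

    # Sketch of waitforserial as traced at autotest_common.sh@1202-1212.
    # $1 is the serial assigned at subsystem creation (e.g. SPDK5).
    waitforserial() {
        local serial=$1
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n ${2:-} ]] && nvme_device_counter=$2     # optional expected-device count (assumed)
        sleep 2                                       # traced @1209: settle before first poll
        while (( i++ <= 15 )); do                     # traced @1210: bounded retry
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")   # traced @1211
            (( nvme_devices == nvme_device_counter )) && return 0         # traced @1212
            sleep 2
        done
        return 1
    }

In this run every call matched on the first poll (nvme_devices=1), so each of the eleven connects added roughly two seconds of settle time before the next subsystem was attached.
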
job6: (groupid=0, jobs=1): err= 0: pid=3739348: Tue Dec 10 12:29:46 2024 00:26:41.252 read: IOPS=259, BW=64.8MiB/s (68.0MB/s)(655MiB/10105msec) 00:26:41.252 slat (usec): min=15, max=394935, avg=1142.50, stdev=10816.81 00:26:41.252 clat (usec): min=769, max=906321, avg=245423.51, stdev=205262.35 00:26:41.252 lat (usec): min=798, max=982358, avg=246566.02, stdev=206279.24 00:26:41.252 clat percentiles (msec): 00:26:41.252 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 8], 20.00th=[ 39], 00:26:41.252 | 30.00th=[ 91], 40.00th=[ 150], 50.00th=[ 209], 60.00th=[ 266], 00:26:41.252 | 70.00th=[ 342], 80.00th=[ 439], 90.00th=[ 542], 95.00th=[ 625], 00:26:41.252 | 99.00th=[ 810], 99.50th=[ 844], 99.90th=[ 911], 99.95th=[ 911], 00:26:41.252 | 99.99th=[ 911] 00:26:41.252 bw ( KiB/s): min=16896, max=178688, per=8.68%, avg=65435.70, stdev=47601.79, samples=20 00:26:41.252 iops : min= 66, max= 698, avg=255.60, stdev=185.95, samples=20 00:26:41.252 lat (usec) : 1000=0.04% 00:26:41.252 lat (msec) : 2=0.61%, 4=4.92%, 10=6.34%, 20=4.66%, 50=5.42% 00:26:41.252 lat (msec) : 100=9.92%, 250=25.76%, 500=29.35%, 750=11.11%, 1000=1.87% 00:26:41.252 cpu : usr=0.12%, sys=1.00%, ctx=928, majf=0, minf=4097 00:26:41.252 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:41.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.252 issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.252 job7: (groupid=0, jobs=1): err= 0: pid=3739357: Tue Dec 10 12:29:46 2024 00:26:41.252 read: IOPS=242, BW=60.6MiB/s (63.5MB/s)(614MiB/10140msec) 00:26:41.252 slat (usec): min=15, max=168640, avg=4071.03, stdev=15128.63 00:26:41.252 clat (msec): min=15, max=733, avg=259.85, stdev=173.18 00:26:41.252 lat (msec): min=15, max=777, avg=263.92, stdev=175.80 00:26:41.252 clat percentiles (msec): 00:26:41.252 | 1.00th=[ 50], 5.00th=[ 68], 10.00th=[ 106], 20.00th=[ 126], 00:26:41.252 | 30.00th=[ 138], 40.00th=[ 153], 50.00th=[ 169], 60.00th=[ 239], 00:26:41.252 | 70.00th=[ 338], 80.00th=[ 439], 90.00th=[ 542], 95.00th=[ 592], 00:26:41.252 | 99.00th=[ 667], 99.50th=[ 684], 99.90th=[ 718], 99.95th=[ 718], 00:26:41.252 | 99.99th=[ 735] 00:26:41.252 bw ( KiB/s): min=23552, max=157184, per=8.13%, avg=61235.20, stdev=41425.03, samples=20 00:26:41.252 iops : min= 92, max= 614, avg=239.20, stdev=161.82, samples=20 00:26:41.252 lat (msec) : 20=0.04%, 50=1.63%, 100=6.72%, 250=52.04%, 500=23.70% 00:26:41.252 lat (msec) : 750=15.88% 00:26:41.252 cpu : usr=0.11%, sys=1.05%, ctx=333, majf=0, minf=3722 00:26:41.252 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:41.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.252 issued rwts: total=2456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.252 job8: (groupid=0, jobs=1): err= 0: pid=3739382: Tue Dec 10 12:29:46 2024 00:26:41.252 read: IOPS=229, BW=57.5MiB/s (60.3MB/s)(582MiB/10128msec) 00:26:41.252 slat (usec): min=10, max=454673, avg=3537.37, stdev=16621.60 00:26:41.252 clat (msec): min=2, max=883, avg=274.61, stdev=173.52 00:26:41.252 lat (msec): min=2, max=883, avg=278.14, stdev=175.62 00:26:41.252 clat percentiles (msec): 00:26:41.252 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 
54], 20.00th=[ 130], 00:26:41.252 | 30.00th=[ 171], 40.00th=[ 211], 50.00th=[ 243], 60.00th=[ 300], 00:26:41.252 | 70.00th=[ 363], 80.00th=[ 443], 90.00th=[ 506], 95.00th=[ 550], 00:26:41.252 | 99.00th=[ 885], 99.50th=[ 885], 99.90th=[ 885], 99.95th=[ 885], 00:26:41.252 | 99.99th=[ 885] 00:26:41.252 bw ( KiB/s): min=31807, max=103936, per=7.69%, avg=57961.55, stdev=22703.88, samples=20 00:26:41.252 iops : min= 124, max= 406, avg=226.40, stdev=88.70, samples=20 00:26:41.252 lat (msec) : 4=1.12%, 10=2.45%, 20=2.36%, 50=3.26%, 100=8.46% 00:26:41.252 lat (msec) : 250=33.42%, 500=38.57%, 750=9.24%, 1000=1.12% 00:26:41.252 cpu : usr=0.13%, sys=1.03%, ctx=450, majf=0, minf=4097 00:26:41.252 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:41.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.252 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.252 job9: (groupid=0, jobs=1): err= 0: pid=3739392: Tue Dec 10 12:29:46 2024 00:26:41.252 read: IOPS=309, BW=77.4MiB/s (81.1MB/s)(785MiB/10137msec) 00:26:41.252 slat (usec): min=16, max=298605, avg=2107.93, stdev=12026.79 00:26:41.252 clat (usec): min=1185, max=863147, avg=204361.50, stdev=180332.98 00:26:41.252 lat (usec): min=1213, max=863180, avg=206469.43, stdev=181552.56 00:26:41.252 clat percentiles (usec): 00:26:41.252 | 1.00th=[ 1385], 5.00th=[ 3130], 10.00th=[ 7242], 20.00th=[ 46400], 00:26:41.252 | 30.00th=[ 80217], 40.00th=[127402], 50.00th=[166724], 60.00th=[198181], 00:26:41.252 | 70.00th=[238027], 80.00th=[337642], 90.00th=[467665], 95.00th=[583009], 00:26:41.252 | 99.00th=[792724], 99.50th=[826278], 99.90th=[843056], 99.95th=[851444], 00:26:41.252 | 99.99th=[859833] 00:26:41.252 bw ( KiB/s): min=22528, max=185856, per=10.44%, avg=78694.40, stdev=49808.14, samples=20 00:26:41.252 iops : min= 88, max= 726, avg=307.40, stdev=194.56, samples=20 00:26:41.252 lat (msec) : 2=3.35%, 4=2.64%, 10=6.21%, 20=2.58%, 50=5.45% 00:26:41.252 lat (msec) : 100=13.35%, 250=37.86%, 500=20.52%, 750=6.63%, 1000=1.40% 00:26:41.252 cpu : usr=0.12%, sys=1.23%, ctx=973, majf=0, minf=4097 00:26:41.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:41.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.252 issued rwts: total=3138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.252 job10: (groupid=0, jobs=1): err= 0: pid=3739396: Tue Dec 10 12:29:46 2024 00:26:41.252 read: IOPS=144, BW=36.2MiB/s (37.9MB/s)(367MiB/10134msec) 00:26:41.252 slat (usec): min=17, max=295237, avg=5771.32, stdev=23312.09 00:26:41.252 clat (msec): min=22, max=784, avg=436.16, stdev=160.26 00:26:41.252 lat (msec): min=22, max=784, avg=441.93, stdev=162.73 00:26:41.252 clat percentiles (msec): 00:26:41.252 | 1.00th=[ 43], 5.00th=[ 107], 10.00th=[ 199], 20.00th=[ 317], 00:26:41.252 | 30.00th=[ 376], 40.00th=[ 409], 50.00th=[ 447], 60.00th=[ 502], 00:26:41.252 | 70.00th=[ 550], 80.00th=[ 584], 90.00th=[ 625], 95.00th=[ 651], 00:26:41.252 | 99.00th=[ 743], 99.50th=[ 768], 99.90th=[ 776], 99.95th=[ 785], 00:26:41.252 | 99.99th=[ 785] 00:26:41.252 bw ( KiB/s): min=20992, max=56320, per=4.76%, avg=35895.30, stdev=10507.99, samples=20 00:26:41.252 
iops : min= 82, max= 220, avg=140.20, stdev=41.04, samples=20 00:26:41.252 lat (msec) : 50=1.64%, 100=3.00%, 250=10.23%, 500=44.54%, 750=39.70% 00:26:41.252 lat (msec) : 1000=0.89% 00:26:41.252 cpu : usr=0.04%, sys=0.67%, ctx=277, majf=0, minf=4097 00:26:41.252 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:26:41.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.252 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.252 issued rwts: total=1466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.252 00:26:41.252 Run status group 0 (all jobs): 00:26:41.252 READ: bw=736MiB/s (772MB/s), 36.2MiB/s-99.7MiB/s (37.9MB/s-105MB/s), io=7462MiB (7824MB), run=10071-10140msec 00:26:41.252 00:26:41.252 Disk stats (read/write): 00:26:41.252 nvme0n1: ios=7891/0, merge=0/0, ticks=1236009/0, in_queue=1236009, util=97.16% 00:26:41.252 nvme10n1: ios=7080/0, merge=0/0, ticks=1232113/0, in_queue=1232113, util=97.33% 00:26:41.252 nvme1n1: ios=3267/0, merge=0/0, ticks=1198559/0, in_queue=1198559, util=97.65% 00:26:41.252 nvme2n1: ios=5160/0, merge=0/0, ticks=1228111/0, in_queue=1228111, util=97.79% 00:26:41.252 nvme3n1: ios=7454/0, merge=0/0, ticks=1233631/0, in_queue=1233631, util=97.87% 00:26:41.252 nvme4n1: ios=3832/0, merge=0/0, ticks=1230291/0, in_queue=1230291, util=98.23% 00:26:41.252 nvme5n1: ios=5078/0, merge=0/0, ticks=1241594/0, in_queue=1241594, util=98.38% 00:26:41.252 nvme6n1: ios=4749/0, merge=0/0, ticks=1197692/0, in_queue=1197692, util=98.53% 00:26:41.252 nvme7n1: ios=4508/0, merge=0/0, ticks=1207312/0, in_queue=1207312, util=98.90% 00:26:41.252 nvme8n1: ios=6134/0, merge=0/0, ticks=1211789/0, in_queue=1211789, util=99.10% 00:26:41.252 nvme9n1: ios=2779/0, merge=0/0, ticks=1200268/0, in_queue=1200268, util=99.24% 00:26:41.252 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:41.252 [global] 00:26:41.252 thread=1 00:26:41.252 invalidate=1 00:26:41.252 rw=randwrite 00:26:41.252 time_based=1 00:26:41.252 runtime=10 00:26:41.252 ioengine=libaio 00:26:41.252 direct=1 00:26:41.252 bs=262144 00:26:41.252 iodepth=64 00:26:41.252 norandommap=1 00:26:41.252 numjobs=1 00:26:41.252 00:26:41.252 [job0] 00:26:41.252 filename=/dev/nvme0n1 00:26:41.252 [job1] 00:26:41.252 filename=/dev/nvme10n1 00:26:41.252 [job2] 00:26:41.252 filename=/dev/nvme1n1 00:26:41.252 [job3] 00:26:41.252 filename=/dev/nvme2n1 00:26:41.252 [job4] 00:26:41.252 filename=/dev/nvme3n1 00:26:41.252 [job5] 00:26:41.252 filename=/dev/nvme4n1 00:26:41.252 [job6] 00:26:41.252 filename=/dev/nvme5n1 00:26:41.252 [job7] 00:26:41.252 filename=/dev/nvme6n1 00:26:41.252 [job8] 00:26:41.252 filename=/dev/nvme7n1 00:26:41.252 [job9] 00:26:41.252 filename=/dev/nvme8n1 00:26:41.252 [job10] 00:26:41.252 filename=/dev/nvme9n1 00:26:41.252 Could not set queue depth (nvme0n1) 00:26:41.252 Could not set queue depth (nvme10n1) 00:26:41.252 Could not set queue depth (nvme1n1) 00:26:41.252 Could not set queue depth (nvme2n1) 00:26:41.252 Could not set queue depth (nvme3n1) 00:26:41.252 Could not set queue depth (nvme4n1) 00:26:41.252 Could not set queue depth (nvme5n1) 00:26:41.252 Could not set queue depth (nvme6n1) 00:26:41.252 Could not set queue depth (nvme7n1) 00:26:41.252 Could not set queue depth (nvme8n1) 00:26:41.252 Could not set queue 
depth (nvme9n1) 00:26:41.252 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.252 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.253 fio-3.35 00:26:41.253 Starting 11 threads 00:26:51.225 00:26:51.225 job0: (groupid=0, jobs=1): err= 0: pid=3740532: Tue Dec 10 12:29:57 2024 00:26:51.225 write: IOPS=432, BW=108MiB/s (113MB/s)(1100MiB/10173msec); 0 zone resets 00:26:51.225 slat (usec): min=26, max=115841, avg=1754.82, stdev=5450.20 00:26:51.225 clat (usec): min=956, max=666871, avg=146120.87, stdev=115822.01 00:26:51.225 lat (usec): min=997, max=701439, avg=147875.69, stdev=117280.70 00:26:51.225 clat percentiles (msec): 00:26:51.225 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 20], 20.00th=[ 53], 00:26:51.225 | 30.00th=[ 66], 40.00th=[ 86], 50.00th=[ 126], 60.00th=[ 165], 00:26:51.225 | 70.00th=[ 207], 80.00th=[ 218], 90.00th=[ 284], 95.00th=[ 376], 00:26:51.225 | 99.00th=[ 535], 99.50th=[ 567], 99.90th=[ 634], 99.95th=[ 667], 00:26:51.225 | 99.99th=[ 667] 00:26:51.225 bw ( KiB/s): min=30208, max=241152, per=13.95%, avg=111027.20, stdev=59999.22, samples=20 00:26:51.225 iops : min= 118, max= 942, avg=433.70, stdev=234.37, samples=20 00:26:51.225 lat (usec) : 1000=0.05% 00:26:51.225 lat (msec) : 2=0.41%, 4=0.98%, 10=4.32%, 20=4.43%, 50=8.00% 00:26:51.225 lat (msec) : 100=26.50%, 250=43.23%, 500=10.09%, 750=2.00% 00:26:51.225 cpu : usr=0.94%, sys=1.41%, ctx=2273, majf=0, minf=1 00:26:51.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:51.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.225 issued rwts: total=0,4400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.225 job1: (groupid=0, jobs=1): err= 0: pid=3740535: Tue Dec 10 12:29:57 2024 00:26:51.225 write: IOPS=299, BW=74.8MiB/s (78.4MB/s)(766MiB/10236msec); 0 zone resets 00:26:51.225 slat (usec): min=21, max=64795, avg=2786.79, stdev=6993.55 00:26:51.225 clat (usec): min=1462, max=540873, avg=211059.42, stdev=144003.55 00:26:51.225 lat (usec): min=1537, max=542135, avg=213846.22, stdev=145997.82 00:26:51.225 clat percentiles (msec): 00:26:51.225 | 
1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 31], 20.00th=[ 80], 00:26:51.225 | 30.00th=[ 133], 40.00th=[ 148], 50.00th=[ 176], 60.00th=[ 205], 00:26:51.225 | 70.00th=[ 275], 80.00th=[ 351], 90.00th=[ 451], 95.00th=[ 468], 00:26:51.225 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 531], 99.95th=[ 542], 00:26:51.225 | 99.99th=[ 542] 00:26:51.225 bw ( KiB/s): min=32768, max=257024, per=9.64%, avg=76748.80, stdev=50849.81, samples=20 00:26:51.225 iops : min= 128, max= 1004, avg=299.80, stdev=198.63, samples=20 00:26:51.225 lat (msec) : 2=0.13%, 4=0.78%, 10=1.93%, 20=4.96%, 50=6.30% 00:26:51.225 lat (msec) : 100=9.31%, 250=44.48%, 500=30.63%, 750=1.47% 00:26:51.225 cpu : usr=0.90%, sys=0.95%, ctx=1365, majf=0, minf=1 00:26:51.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:51.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.225 issued rwts: total=0,3062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.225 job2: (groupid=0, jobs=1): err= 0: pid=3740545: Tue Dec 10 12:29:57 2024 00:26:51.225 write: IOPS=287, BW=72.0MiB/s (75.5MB/s)(736MiB/10231msec); 0 zone resets 00:26:51.225 slat (usec): min=24, max=84106, avg=3300.79, stdev=7778.89 00:26:51.225 clat (usec): min=966, max=697488, avg=218933.12, stdev=152713.70 00:26:51.225 lat (usec): min=1032, max=697538, avg=222233.91, stdev=154912.57 00:26:51.225 clat percentiles (msec): 00:26:51.225 | 1.00th=[ 7], 5.00th=[ 42], 10.00th=[ 61], 20.00th=[ 83], 00:26:51.225 | 30.00th=[ 104], 40.00th=[ 134], 50.00th=[ 171], 60.00th=[ 220], 00:26:51.225 | 70.00th=[ 300], 80.00th=[ 409], 90.00th=[ 447], 95.00th=[ 468], 00:26:51.225 | 99.00th=[ 592], 99.50th=[ 600], 99.90th=[ 667], 99.95th=[ 701], 00:26:51.225 | 99.99th=[ 701] 00:26:51.225 bw ( KiB/s): min=26624, max=215040, per=9.27%, avg=73762.60, stdev=56300.83, samples=20 00:26:51.225 iops : min= 104, max= 840, avg=288.10, stdev=219.91, samples=20 00:26:51.225 lat (usec) : 1000=0.03% 00:26:51.225 lat (msec) : 2=0.20%, 4=0.41%, 10=0.95%, 20=1.43%, 50=2.68% 00:26:51.225 lat (msec) : 100=20.07%, 250=38.20%, 500=32.90%, 750=3.12% 00:26:51.225 cpu : usr=0.75%, sys=0.96%, ctx=979, majf=0, minf=1 00:26:51.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:51.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.225 issued rwts: total=0,2945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.225 job3: (groupid=0, jobs=1): err= 0: pid=3740546: Tue Dec 10 12:29:57 2024 00:26:51.225 write: IOPS=299, BW=74.8MiB/s (78.4MB/s)(755MiB/10093msec); 0 zone resets 00:26:51.225 slat (usec): min=24, max=133897, avg=2667.74, stdev=8439.07 00:26:51.225 clat (usec): min=939, max=606661, avg=211089.51, stdev=166167.97 00:26:51.225 lat (usec): min=983, max=606723, avg=213757.25, stdev=168263.23 00:26:51.225 clat percentiles (msec): 00:26:51.225 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 58], 20.00th=[ 90], 00:26:51.225 | 30.00th=[ 110], 40.00th=[ 134], 50.00th=[ 144], 60.00th=[ 174], 00:26:51.225 | 70.00th=[ 211], 80.00th=[ 435], 90.00th=[ 510], 95.00th=[ 523], 00:26:51.225 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 584], 99.95th=[ 584], 00:26:51.225 | 99.99th=[ 609] 00:26:51.225 bw ( KiB/s): min=28672, max=157184, 
per=9.51%, avg=75699.20, stdev=44267.71, samples=20 00:26:51.225 iops : min= 112, max= 614, avg=295.70, stdev=172.92, samples=20 00:26:51.225 lat (usec) : 1000=0.07% 00:26:51.225 lat (msec) : 2=0.46%, 4=1.49%, 10=3.38%, 20=1.62%, 50=2.45% 00:26:51.225 lat (msec) : 100=16.72%, 250=48.31%, 500=14.34%, 750=11.16% 00:26:51.225 cpu : usr=0.56%, sys=1.03%, ctx=1370, majf=0, minf=1 00:26:51.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:51.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.225 issued rwts: total=0,3020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.225 job4: (groupid=0, jobs=1): err= 0: pid=3740547: Tue Dec 10 12:29:57 2024 00:26:51.225 write: IOPS=252, BW=63.2MiB/s (66.2MB/s)(646MiB/10225msec); 0 zone resets 00:26:51.225 slat (usec): min=22, max=145365, avg=2883.59, stdev=8457.22 00:26:51.225 clat (usec): min=1637, max=728938, avg=250244.75, stdev=159988.26 00:26:51.225 lat (msec): min=2, max=728, avg=253.13, stdev=162.37 00:26:51.225 clat percentiles (msec): 00:26:51.225 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 47], 20.00th=[ 96], 00:26:51.225 | 30.00th=[ 130], 40.00th=[ 180], 50.00th=[ 211], 60.00th=[ 275], 00:26:51.225 | 70.00th=[ 384], 80.00th=[ 430], 90.00th=[ 468], 95.00th=[ 498], 00:26:51.225 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 642], 99.95th=[ 726], 00:26:51.225 | 99.99th=[ 726] 00:26:51.225 bw ( KiB/s): min=30720, max=148992, per=8.10%, avg=64512.00, stdev=35110.38, samples=20 00:26:51.225 iops : min= 120, max= 582, avg=252.00, stdev=137.15, samples=20 00:26:51.225 lat (msec) : 2=0.04%, 4=0.19%, 10=0.97%, 20=1.59%, 50=7.62% 00:26:51.225 lat (msec) : 100=11.46%, 250=34.87%, 500=38.66%, 750=4.61% 00:26:51.225 cpu : usr=0.47%, sys=0.99%, ctx=1406, majf=0, minf=1 00:26:51.225 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:51.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.225 issued rwts: total=0,2584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.225 job5: (groupid=0, jobs=1): err= 0: pid=3740548: Tue Dec 10 12:29:57 2024 00:26:51.225 write: IOPS=285, BW=71.3MiB/s (74.8MB/s)(726MiB/10174msec); 0 zone resets 00:26:51.225 slat (usec): min=25, max=53748, avg=2428.82, stdev=6425.48 00:26:51.225 clat (usec): min=1320, max=506306, avg=221848.23, stdev=122800.57 00:26:51.225 lat (usec): min=1356, max=506486, avg=224277.05, stdev=124202.86 00:26:51.225 clat percentiles (msec): 00:26:51.225 | 1.00th=[ 3], 5.00th=[ 28], 10.00th=[ 92], 20.00th=[ 124], 00:26:51.225 | 30.00th=[ 144], 40.00th=[ 169], 50.00th=[ 192], 60.00th=[ 224], 00:26:51.225 | 70.00th=[ 271], 80.00th=[ 347], 90.00th=[ 426], 95.00th=[ 439], 00:26:51.225 | 99.00th=[ 460], 99.50th=[ 472], 99.90th=[ 498], 99.95th=[ 502], 00:26:51.225 | 99.99th=[ 506] 00:26:51.225 bw ( KiB/s): min=36352, max=131584, per=9.13%, avg=72678.40, stdev=29973.23, samples=20 00:26:51.225 iops : min= 142, max= 514, avg=283.90, stdev=117.08, samples=20 00:26:51.225 lat (msec) : 2=0.52%, 4=2.17%, 10=0.83%, 20=1.21%, 50=1.10% 00:26:51.225 lat (msec) : 100=6.31%, 250=52.72%, 500=35.08%, 750=0.07% 00:26:51.225 cpu : usr=0.66%, sys=1.01%, ctx=1479, majf=0, minf=1 00:26:51.225 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:51.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.225 issued rwts: total=0,2902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.225 job6: (groupid=0, jobs=1): err= 0: pid=3740549: Tue Dec 10 12:29:57 2024 00:26:51.225 write: IOPS=287, BW=71.8MiB/s (75.2MB/s)(734MiB/10232msec); 0 zone resets 00:26:51.225 slat (usec): min=21, max=80982, avg=3164.71, stdev=6955.61 00:26:51.225 clat (usec): min=1174, max=556856, avg=219680.53, stdev=124996.00 00:26:51.225 lat (usec): min=1205, max=556896, avg=222845.24, stdev=126571.92 00:26:51.225 clat percentiles (msec): 00:26:51.225 | 1.00th=[ 3], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 82], 00:26:51.225 | 30.00th=[ 190], 40.00th=[ 209], 50.00th=[ 213], 60.00th=[ 226], 00:26:51.225 | 70.00th=[ 251], 80.00th=[ 296], 90.00th=[ 409], 95.00th=[ 485], 00:26:51.225 | 99.00th=[ 531], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 558], 00:26:51.225 | 99.99th=[ 558] 00:26:51.225 bw ( KiB/s): min=32768, max=265216, per=9.24%, avg=73548.80, stdev=48762.04, samples=20 00:26:51.225 iops : min= 128, max= 1036, avg=287.30, stdev=190.48, samples=20 00:26:51.225 lat (msec) : 2=0.41%, 4=1.19%, 10=0.58%, 50=2.15%, 100=17.09% 00:26:51.225 lat (msec) : 250=48.55%, 500=26.93%, 750=3.10% 00:26:51.225 cpu : usr=0.72%, sys=0.92%, ctx=944, majf=0, minf=1 00:26:51.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:51.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.225 issued rwts: total=0,2937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.226 job7: (groupid=0, jobs=1): err= 0: pid=3740550: Tue Dec 10 12:29:57 2024 00:26:51.226 write: IOPS=237, BW=59.5MiB/s (62.3MB/s)(608MiB/10230msec); 0 zone resets 00:26:51.226 slat (usec): min=24, max=42569, avg=4038.05, stdev=7885.91 00:26:51.226 clat (msec): min=29, max=676, avg=264.90, stdev=106.11 00:26:51.226 lat (msec): min=29, max=676, avg=268.93, stdev=107.52 00:26:51.226 clat percentiles (msec): 00:26:51.226 | 1.00th=[ 74], 5.00th=[ 138], 10.00th=[ 176], 20.00th=[ 201], 00:26:51.226 | 30.00th=[ 211], 40.00th=[ 215], 50.00th=[ 224], 60.00th=[ 236], 00:26:51.226 | 70.00th=[ 284], 80.00th=[ 338], 90.00th=[ 456], 95.00th=[ 485], 00:26:51.226 | 99.00th=[ 542], 99.50th=[ 575], 99.90th=[ 642], 99.95th=[ 676], 00:26:51.226 | 99.99th=[ 676] 00:26:51.226 bw ( KiB/s): min=32768, max=97280, per=7.62%, avg=60646.40, stdev=20910.99, samples=20 00:26:51.226 iops : min= 128, max= 380, avg=236.90, stdev=81.68, samples=20 00:26:51.226 lat (msec) : 50=0.33%, 100=1.56%, 250=61.16%, 500=33.42%, 750=3.53% 00:26:51.226 cpu : usr=0.69%, sys=0.83%, ctx=662, majf=0, minf=1 00:26:51.226 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:51.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.226 issued rwts: total=0,2433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.226 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.226 job8: (groupid=0, jobs=1): err= 0: pid=3740553: Tue Dec 10 12:29:57 2024 00:26:51.226 write: IOPS=234, BW=58.7MiB/s 
(61.6MB/s)(601MiB/10231msec); 0 zone resets 00:26:51.226 slat (usec): min=23, max=140376, avg=4047.17, stdev=9712.32 00:26:51.226 clat (usec): min=1752, max=544866, avg=268224.94, stdev=128771.55 00:26:51.226 lat (msec): min=2, max=544, avg=272.27, stdev=130.43 00:26:51.226 clat percentiles (msec): 00:26:51.226 | 1.00th=[ 16], 5.00th=[ 115], 10.00th=[ 125], 20.00th=[ 155], 00:26:51.226 | 30.00th=[ 182], 40.00th=[ 201], 50.00th=[ 241], 60.00th=[ 275], 00:26:51.226 | 70.00th=[ 326], 80.00th=[ 418], 90.00th=[ 472], 95.00th=[ 489], 00:26:51.226 | 99.00th=[ 518], 99.50th=[ 523], 99.90th=[ 527], 99.95th=[ 542], 00:26:51.226 | 99.99th=[ 542] 00:26:51.226 bw ( KiB/s): min=30720, max=130560, per=7.52%, avg=59878.40, stdev=25448.90, samples=20 00:26:51.226 iops : min= 120, max= 510, avg=233.90, stdev=99.41, samples=20 00:26:51.226 lat (msec) : 2=0.04%, 4=0.08%, 10=0.33%, 20=1.12%, 50=0.12% 00:26:51.226 lat (msec) : 100=2.62%, 250=48.77%, 500=43.28%, 750=3.62% 00:26:51.226 cpu : usr=0.76%, sys=0.82%, ctx=697, majf=0, minf=1 00:26:51.226 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:51.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.226 issued rwts: total=0,2403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.226 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.226 job9: (groupid=0, jobs=1): err= 0: pid=3740554: Tue Dec 10 12:29:57 2024 00:26:51.226 write: IOPS=251, BW=63.0MiB/s (66.0MB/s)(645MiB/10238msec); 0 zone resets 00:26:51.226 slat (usec): min=17, max=53246, avg=3661.89, stdev=7915.35 00:26:51.226 clat (msec): min=5, max=542, avg=250.37, stdev=123.38 00:26:51.226 lat (msec): min=5, max=542, avg=254.03, stdev=125.22 00:26:51.226 clat percentiles (msec): 00:26:51.226 | 1.00th=[ 51], 5.00th=[ 95], 10.00th=[ 101], 20.00th=[ 113], 00:26:51.226 | 30.00th=[ 161], 40.00th=[ 199], 50.00th=[ 245], 60.00th=[ 275], 00:26:51.226 | 70.00th=[ 317], 80.00th=[ 405], 90.00th=[ 435], 95.00th=[ 447], 00:26:51.226 | 99.00th=[ 481], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 542], 00:26:51.226 | 99.99th=[ 542] 00:26:51.226 bw ( KiB/s): min=34816, max=157696, per=8.09%, avg=64358.40, stdev=32765.52, samples=20 00:26:51.226 iops : min= 136, max= 616, avg=251.40, stdev=127.99, samples=20 00:26:51.226 lat (msec) : 10=0.12%, 20=0.04%, 50=0.89%, 100=8.77%, 250=42.05% 00:26:51.226 lat (msec) : 500=47.75%, 750=0.39% 00:26:51.226 cpu : usr=0.55%, sys=0.79%, ctx=902, majf=0, minf=1 00:26:51.226 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:51.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.226 issued rwts: total=0,2578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.226 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.226 job10: (groupid=0, jobs=1): err= 0: pid=3740555: Tue Dec 10 12:29:57 2024 00:26:51.226 write: IOPS=251, BW=62.8MiB/s (65.8MB/s)(642MiB/10230msec); 0 zone resets 00:26:51.226 slat (usec): min=24, max=96244, avg=3587.21, stdev=7682.64 00:26:51.226 clat (msec): min=2, max=676, avg=251.13, stdev=115.49 00:26:51.226 lat (msec): min=2, max=676, avg=254.72, stdev=117.01 00:26:51.226 clat percentiles (msec): 00:26:51.226 | 1.00th=[ 8], 5.00th=[ 75], 10.00th=[ 140], 20.00th=[ 188], 00:26:51.226 | 30.00th=[ 207], 40.00th=[ 211], 50.00th=[ 218], 60.00th=[ 232], 00:26:51.226 | 
70.00th=[ 264], 80.00th=[ 334], 90.00th=[ 447], 95.00th=[ 481], 00:26:51.226 | 99.00th=[ 542], 99.50th=[ 575], 99.90th=[ 642], 99.95th=[ 676], 00:26:51.226 | 99.99th=[ 676] 00:26:51.226 bw ( KiB/s): min=32768, max=108032, per=8.06%, avg=64128.00, stdev=23490.92, samples=20 00:26:51.226 iops : min= 128, max= 422, avg=250.50, stdev=91.76, samples=20 00:26:51.226 lat (msec) : 4=0.19%, 10=1.44%, 20=1.60%, 50=0.66%, 100=2.57% 00:26:51.226 lat (msec) : 250=60.45%, 500=29.54%, 750=3.54% 00:26:51.226 cpu : usr=0.62%, sys=0.89%, ctx=871, majf=0, minf=2 00:26:51.226 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:26:51.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.226 issued rwts: total=0,2569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.226 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.226 00:26:51.226 Run status group 0 (all jobs): 00:26:51.226 WRITE: bw=777MiB/s (815MB/s), 58.7MiB/s-108MiB/s (61.6MB/s-113MB/s), io=7958MiB (8345MB), run=10093-10238msec 00:26:51.226 00:26:51.226 Disk stats (read/write): 00:26:51.226 nvme0n1: ios=50/8795, merge=0/0, ticks=3429/1245107, in_queue=1248536, util=99.89% 00:26:51.226 nvme10n1: ios=49/6067, merge=0/0, ticks=74/1232122, in_queue=1232196, util=97.88% 00:26:51.226 nvme1n1: ios=15/5838, merge=0/0, ticks=23/1226759, in_queue=1226782, util=97.66% 00:26:51.226 nvme2n1: ios=48/5720, merge=0/0, ticks=2876/1195236, in_queue=1198112, util=100.00% 00:26:51.226 nvme3n1: ios=15/5114, merge=0/0, ticks=107/1238053, in_queue=1238160, util=98.00% 00:26:51.226 nvme4n1: ios=43/5792, merge=0/0, ticks=1707/1247401, in_queue=1249108, util=100.00% 00:26:51.226 nvme5n1: ios=0/5818, merge=0/0, ticks=0/1229831, in_queue=1229831, util=98.30% 00:26:51.226 nvme6n1: ios=0/4813, merge=0/0, ticks=0/1226935, in_queue=1226935, util=98.41% 00:26:51.226 nvme7n1: ios=43/4751, merge=0/0, ticks=2518/1195000, in_queue=1197518, util=100.00% 00:26:51.226 nvme8n1: ios=48/5100, merge=0/0, ticks=629/1229648, in_queue=1230277, util=100.00% 00:26:51.226 nvme9n1: ios=0/5085, merge=0/0, ticks=0/1229769, in_queue=1229769, util=99.07% 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:51.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.226 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:51.792 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:51.792 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:51.792 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:51.792 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:51.792 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:51.792 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:51.792 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:51.793 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:51.793 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:51.793 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.793 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.793 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.793 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.793 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:52.380 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.380 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:53.023 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.023 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:53.281 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.281 12:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.281 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:53.847 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.847 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:54.105 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:54.105 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:54.105 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.105 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.105 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:54.105 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.105 12:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:54.363 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.363 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:54.363 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.363 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.363 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.363 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.363 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:54.620 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.620 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:54.878 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.878 12:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.878 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:55.136 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.136 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:55.702 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.702 
12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.702 rmmod nvme_tcp 00:26:55.702 rmmod nvme_fabrics 00:26:55.702 rmmod nvme_keyring 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3732741 ']' 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3732741 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3732741 ']' 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3732741 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3732741 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3732741' 00:26:55.702 killing process 
with pid 3732741 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3732741 00:26:55.702 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3732741 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.984 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:01.515 00:27:01.515 real 1m16.385s 00:27:01.515 user 4m40.393s 00:27:01.515 sys 0m15.088s 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.515 ************************************ 00:27:01.515 END TEST nvmf_multiconnection 00:27:01.515 ************************************ 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:01.515 ************************************ 00:27:01.515 START TEST nvmf_initiator_timeout 00:27:01.515 ************************************ 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:01.515 * Looking for test storage... 
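
Between the two test suites, it helps to see what the fio-wrapper invocation in the randwrite run above actually does: it generates an INI job file and runs fio against it. The following is a reconstruction from the parameters echoed in the log; the job-file contents are taken verbatim from the [global] and [jobN] lines, while the direct fio invocation at the end is an assumption standing in for the wrapper.

# Reassembled from the [global]/[jobN] lines printed above; the wrapper
# arguments (-p nvmf -i 262144 -d 64 -t randwrite -r 10) map onto the
# bs, iodepth, rw and runtime settings below.
cat > multiconnection-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
[job2]
filename=/dev/nvme1n1
[job3]
filename=/dev/nvme2n1
[job4]
filename=/dev/nvme3n1
[job5]
filename=/dev/nvme4n1
[job6]
filename=/dev/nvme5n1
[job7]
filename=/dev/nvme6n1
[job8]
filename=/dev/nvme7n1
[job9]
filename=/dev/nvme8n1
[job10]
filename=/dev/nvme9n1
EOF
fio multiconnection-randwrite.fio   # assumed direct invocation
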
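Likewise, the per-subsystem teardown traced above (nvme disconnect, polling lsblk for the SPDKn serial, then rpc_cmd nvmf_delete_subsystem) reduces to a small wait helper. This is a minimal sketch reconstructed from the traced commands, not the verbatim autotest_common.sh source; the retry cap and poll interval are assumptions.

# Wait until no block device reports the given NVMe serial, i.e. the
# controller for that subsystem has fully gone away after disconnect.
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" \
        || lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1   # assumed retry cap
        sleep 1                      # assumed poll interval
    done
    return 0
}

# Flow mirrored from the trace, repeated for each of the 11 subsystems:
#   nvme disconnect -n nqn.2016-06.io.spdk:cnode1
#   waitforserial_disconnect SPDK1
#   rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
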
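The initiator_timeout test that follows first probes the installed lcov version via the cmp_versions machinery in scripts/common.sh (the lt 1.15 2 trace below). Condensed to its core logic, as a sketch that assumes purely numeric components split on dots, dashes and colons:

# Success (exit 0) when $1 < $2; components are split on '.', '-' and
# ':' exactly as the IFS=.-: reads in the trace do.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len1=${#ver1[@]} len2=${#ver2[@]}
    for (( v = 0; v < (len1 > len2 ? len1 : len2); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing parts count as 0
        (( a > b )) && return 1                 # already greater
        (( a < b )) && return 0
    done
    return 1   # equal versions are not less-than
}

# lt 1.15 2 succeeds here, so the lcov branch/function coverage
# options get exported, as seen in the LCOV_OPTS lines of this log.
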
00:27:01.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:01.515 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.515 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.515 --rc genhtml_branch_coverage=1 00:27:01.515 --rc genhtml_function_coverage=1 00:27:01.515 --rc genhtml_legend=1 00:27:01.515 --rc geninfo_all_blocks=1 00:27:01.515 --rc geninfo_unexecuted_blocks=1 00:27:01.515 00:27:01.515 ' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.516 --rc genhtml_branch_coverage=1 00:27:01.516 --rc genhtml_function_coverage=1 00:27:01.516 --rc genhtml_legend=1 00:27:01.516 --rc geninfo_all_blocks=1 00:27:01.516 --rc geninfo_unexecuted_blocks=1 00:27:01.516 00:27:01.516 ' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.516 --rc genhtml_branch_coverage=1 00:27:01.516 --rc genhtml_function_coverage=1 00:27:01.516 --rc genhtml_legend=1 00:27:01.516 --rc geninfo_all_blocks=1 00:27:01.516 --rc geninfo_unexecuted_blocks=1 00:27:01.516 00:27:01.516 ' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.516 --rc genhtml_branch_coverage=1 00:27:01.516 --rc genhtml_function_coverage=1 00:27:01.516 --rc genhtml_legend=1 00:27:01.516 --rc geninfo_all_blocks=1 00:27:01.516 --rc geninfo_unexecuted_blocks=1 00:27:01.516 00:27:01.516 ' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.516 12:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:01.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:01.516 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.784 12:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:06.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.784 12:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:06.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:06.784 Found net devices under 0000:af:00.0: cvl_0_0 00:27:06.784 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.785 12:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:06.785 Found net devices under 0000:af:00.1: cvl_0_1 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.785 12:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:27:06.785 00:27:06.785 --- 10.0.0.2 ping statistics --- 00:27:06.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.785 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:27:06.785 00:27:06.785 --- 10.0.0.1 ping statistics --- 00:27:06.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.785 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
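Condensed, the nvmf_tcp_init sequence traced above builds a two-interface topology: the target-side port moves into its own network namespace while the initiator side stays in the root namespace. A sketch using the interface names and addresses from this run (the helper also has paths for virt/veth setups that are skipped here because NET_TYPE is phy):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator

The two one-packet pings above are exactly the reachability check whose output appears in the trace before the target is started.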
00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3746863 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3746863 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3746863 ']' 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.785 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.785 [2024-12-10 12:30:13.376054] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:27:06.785 [2024-12-10 12:30:13.376151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.785 [2024-12-10 12:30:13.493746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.785 [2024-12-10 12:30:13.602650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.785 [2024-12-10 12:30:13.602694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.785 [2024-12-10 12:30:13.602704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.785 [2024-12-10 12:30:13.602714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.785 [2024-12-10 12:30:13.602721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
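waitforlisten above holds the test until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A minimal stand-in, under the assumption that socket presence is a sufficient readiness signal (the real autotest_common.sh helper is more thorough, retrying RPCs with a max_retries budget as its echo shows):

  # Launch the target inside the namespace with the flags from this run.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break                                    # RPC socket is up
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done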
00:27:06.785 [2024-12-10 12:30:13.605153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.785 [2024-12-10 12:30:13.605271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.785 [2024-12-10 12:30:13.605291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.785 [2024-12-10 12:30:13.605300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.722 Malloc0 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.722 Delay0 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.722 [2024-12-10 12:30:14.336270] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.722 12:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.722 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.723 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.723 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.723 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.723 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.723 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.723 [2024-12-10 12:30:14.368558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.723 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.723 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:08.658 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:08.658 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:08.658 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:08.658 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:08.658 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3747566 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:27:11.191 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:11.191 [global] 00:27:11.191 thread=1 00:27:11.191 invalidate=1 00:27:11.191 rw=write 00:27:11.191 time_based=1 00:27:11.191 runtime=60 00:27:11.191 ioengine=libaio 00:27:11.191 direct=1 00:27:11.191 bs=4096 00:27:11.191 iodepth=1 00:27:11.191 norandommap=0 00:27:11.191 numjobs=1 00:27:11.191 00:27:11.191 verify_dump=1 00:27:11.191 verify_backlog=512 00:27:11.191 verify_state_save=0 00:27:11.191 do_verify=1 00:27:11.191 verify=crc32c-intel 00:27:11.191 [job0] 00:27:11.191 filename=/dev/nvme0n1 00:27:11.191 Could not set queue depth (nvme0n1) 00:27:11.191 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:11.191 fio-3.35 00:27:11.191 Starting 1 thread 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.724 true 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.724 true 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.724 true 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.724 true 00:27:13.724 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.982 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:27:17.265 true 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.265 true 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.265 true 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.265 true 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:17.265 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3747566 00:28:13.485 00:28:13.485 job0: (groupid=0, jobs=1): err= 0: pid=3747736: Tue Dec 10 12:31:17 2024 00:28:13.485 read: IOPS=84, BW=338KiB/s (347kB/s)(19.8MiB/60024msec) 00:28:13.485 slat (nsec): min=6226, max=41490, avg=8664.74, stdev=4843.26 00:28:13.485 clat (usec): min=249, max=41609k, avg=11579.37, stdev=583955.76 00:28:13.485 lat (usec): min=256, max=41609k, avg=11588.04, stdev=583955.92 00:28:13.485 clat percentiles (usec): 00:28:13.485 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 273], 00:28:13.485 | 20.00th=[ 281], 30.00th=[ 285], 40.00th=[ 289], 00:28:13.485 | 50.00th=[ 293], 60.00th=[ 297], 70.00th=[ 306], 00:28:13.485 | 80.00th=[ 314], 90.00th=[ 347], 95.00th=[ 41157], 00:28:13.485 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:28:13.485 | 99.95th=[ 42206], 99.99th=[17112761] 00:28:13.485 write: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60024msec); 0 zone resets 00:28:13.485 slat (usec): min=9, max=24975, avg=19.46, stdev=403.28 00:28:13.485 clat (usec): min=165, max=1106, avg=205.57, stdev=21.36 00:28:13.485 lat (usec): min=175, max=25328, avg=225.03, stdev=406.82 00:28:13.485 clat percentiles (usec): 00:28:13.485 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:28:13.485 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:28:13.485 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 227], 95.00th=[ 233], 00:28:13.485 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 285], 99.95th=[ 355], 00:28:13.485 | 99.99th=[ 1106] 
00:28:13.485 bw ( KiB/s): min= 2928, max= 8192, per=100.00%, avg=5851.43, stdev=2073.01, samples=7 00:28:13.485 iops : min= 732, max= 2048, avg=1462.86, stdev=518.25, samples=7 00:28:13.485 lat (usec) : 250=49.91%, 500=46.21%, 750=0.10% 00:28:13.485 lat (msec) : 2=0.01%, 50=3.77%, >=2000=0.01% 00:28:13.485 cpu : usr=0.10%, sys=0.17%, ctx=10205, majf=0, minf=1 00:28:13.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.485 issued rwts: total=5078,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:13.485 00:28:13.485 Run status group 0 (all jobs): 00:28:13.485 READ: bw=338KiB/s (347kB/s), 338KiB/s-338KiB/s (347kB/s-347kB/s), io=19.8MiB (20.8MB), run=60024-60024msec 00:28:13.485 WRITE: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (21.0MB), run=60024-60024msec 00:28:13.485 00:28:13.485 Disk stats (read/write): 00:28:13.485 nvme0n1: ios=5126/5120, merge=0/0, ticks=18313/1023, in_queue=19336, util=99.92% 00:28:13.485 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:13.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:13.485 nvmf hotplug test: fio successful as expected 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM 
EXIT 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.485 rmmod nvme_tcp 00:28:13.485 rmmod nvme_fabrics 00:28:13.485 rmmod nvme_keyring 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3746863 ']' 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3746863 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3746863 ']' 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3746863 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3746863 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3746863' 00:28:13.485 killing process with pid 3746863 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3746863 00:28:13.485 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3746863 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:13.485 12:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.485 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.387 00:28:15.387 real 1m14.033s 00:28:15.387 user 4m29.084s 00:28:15.387 sys 0m6.101s 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:15.387 ************************************ 00:28:15.387 END TEST nvmf_initiator_timeout 00:28:15.387 ************************************ 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.387 12:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.653 12:31:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:20.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:20.653 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:20.653 Found net devices under 0000:af:00.0: cvl_0_0 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:20.653 Found net devices under 0000:af:00.1: cvl_0_1 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.653 12:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.654 ************************************ 00:28:20.654 START TEST nvmf_perf_adq 00:28:20.654 ************************************ 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.654 * Looking for test storage... 
00:28:20.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.654 --rc genhtml_branch_coverage=1 00:28:20.654 --rc genhtml_function_coverage=1 00:28:20.654 --rc genhtml_legend=1 00:28:20.654 --rc geninfo_all_blocks=1 00:28:20.654 --rc geninfo_unexecuted_blocks=1 00:28:20.654 00:28:20.654 ' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.654 --rc genhtml_branch_coverage=1 00:28:20.654 --rc genhtml_function_coverage=1 00:28:20.654 --rc genhtml_legend=1 00:28:20.654 --rc geninfo_all_blocks=1 00:28:20.654 --rc geninfo_unexecuted_blocks=1 00:28:20.654 00:28:20.654 ' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.654 --rc genhtml_branch_coverage=1 00:28:20.654 --rc genhtml_function_coverage=1 00:28:20.654 --rc genhtml_legend=1 00:28:20.654 --rc geninfo_all_blocks=1 00:28:20.654 --rc geninfo_unexecuted_blocks=1 00:28:20.654 00:28:20.654 ' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.654 --rc genhtml_branch_coverage=1 00:28:20.654 --rc genhtml_function_coverage=1 00:28:20.654 --rc genhtml_legend=1 00:28:20.654 --rc geninfo_all_blocks=1 00:28:20.654 --rc geninfo_unexecuted_blocks=1 00:28:20.654 00:28:20.654 ' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
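The lt 1.15 2 walk above is scripts/common.sh splitting each version string on ".-:" and comparing component-wise until one side wins. The same idea in a few lines — a sketch of the technique, not the cmp_versions implementation itself:

  version_lt() {
      local IFS=.-: i v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # strictly smaller component
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                           # versions are equal
  }
  version_lt 1.15 2 && echo "1.15 < 2"                   # matches the lcov check traced above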
00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:20.654 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.655 12:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.919 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.920 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:25.920 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:25.920 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:25.920 Found net devices under 0000:af:00.0: cvl_0_0 00:28:25.920 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:25.920 Found net devices under 0000:af:00.1: cvl_0_1 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:25.920 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:26.854 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:29.385 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.670 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:34.671 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:34.671 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:34.671 Found net devices under 0000:af:00.0: cvl_0_0 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:34.671 Found net devices under 0000:af:00.1: cvl_0_1 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.671 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.745 ms 00:28:34.671 00:28:34.671 --- 10.0.0.2 ping statistics --- 00:28:34.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.671 rtt min/avg/max/mdev = 0.745/0.745/0.745/0.000 ms 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:34.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:28:34.671 00:28:34.671 --- 10.0.0.1 ping statistics --- 00:28:34.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.671 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.671 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3765399 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3765399 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3765399 ']' 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.672 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.672 [2024-12-10 12:31:41.293279] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
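This is the launch step: nvmfappstart runs the target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and waits for its RPC socket before configuring it. Condensed into a standalone sketch (the polling loop is an illustrative stand-in for the waitforlisten helper; the binary and socket paths are the ones in the log):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Poll for the app's RPC UNIX socket before issuing any RPCs.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

With --wait-for-rpc the app accepts RPCs right away but defers framework initialization until the framework_start_init RPC, which is what lets the test set socket options before the TCP transport comes up.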
00:28:34.672 [2024-12-10 12:31:41.293373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.672 [2024-12-10 12:31:41.411350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.929 [2024-12-10 12:31:41.525507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.929 [2024-12-10 12:31:41.525547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.929 [2024-12-10 12:31:41.525558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.929 [2024-12-10 12:31:41.525569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.929 [2024-12-10 12:31:41.525578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.929 [2024-12-10 12:31:41.528015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.929 [2024-12-10 12:31:41.528096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.929 [2024-12-10 12:31:41.528157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.929 [2024-12-10 12:31:41.528176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.494 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.494 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:35.494 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.494 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.494 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.494 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.495 
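adq_configure_nvmf_target then drives the target over RPC. The sequence, condensed with flags and names exactly as they appear in the trace (rpc_cmd in the trace effectively forwards these arguments to scripts/rpc.py):

./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420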
12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.495 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.753 [2024-12-10 12:31:42.493921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.753 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.011 Malloc1 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.011 [2024-12-10 12:31:42.607009] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3765655 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:36.011 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:37.910 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:37.910 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.910 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.910 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.910 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:37.910 "tick_rate": 2100000000, 00:28:37.910 "poll_groups": [ 00:28:37.910 { 00:28:37.910 "name": "nvmf_tgt_poll_group_000", 00:28:37.910 "admin_qpairs": 1, 00:28:37.910 "io_qpairs": 1, 00:28:37.910 "current_admin_qpairs": 1, 00:28:37.910 "current_io_qpairs": 1, 00:28:37.910 "pending_bdev_io": 0, 00:28:37.911 "completed_nvme_io": 18160, 00:28:37.911 "transports": [ 00:28:37.911 { 00:28:37.911 "trtype": "TCP" 00:28:37.911 } 00:28:37.911 ] 00:28:37.911 }, 00:28:37.911 { 00:28:37.911 "name": "nvmf_tgt_poll_group_001", 00:28:37.911 "admin_qpairs": 0, 00:28:37.911 "io_qpairs": 1, 00:28:37.911 "current_admin_qpairs": 0, 00:28:37.911 "current_io_qpairs": 1, 00:28:37.911 "pending_bdev_io": 0, 00:28:37.911 "completed_nvme_io": 18184, 00:28:37.911 "transports": [ 00:28:37.911 { 00:28:37.911 "trtype": "TCP" 00:28:37.911 } 00:28:37.911 ] 00:28:37.911 }, 00:28:37.911 { 00:28:37.911 "name": "nvmf_tgt_poll_group_002", 00:28:37.911 "admin_qpairs": 0, 00:28:37.911 "io_qpairs": 1, 00:28:37.911 "current_admin_qpairs": 0, 00:28:37.911 "current_io_qpairs": 1, 00:28:37.911 "pending_bdev_io": 0, 00:28:37.911 "completed_nvme_io": 18335, 00:28:37.911 "transports": [ 00:28:37.911 { 00:28:37.911 "trtype": "TCP" 00:28:37.911 } 00:28:37.911 ] 00:28:37.911 }, 00:28:37.911 { 00:28:37.911 "name": "nvmf_tgt_poll_group_003", 00:28:37.911 "admin_qpairs": 0, 00:28:37.911 "io_qpairs": 1, 00:28:37.911 "current_admin_qpairs": 0, 00:28:37.911 "current_io_qpairs": 1, 00:28:37.911 "pending_bdev_io": 0, 00:28:37.911 "completed_nvme_io": 17863, 00:28:37.911 "transports": [ 00:28:37.911 { 00:28:37.911 "trtype": "TCP" 00:28:37.911 } 00:28:37.911 ] 00:28:37.911 } 00:28:37.911 ] 00:28:37.911 }' 00:28:37.911 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:37.911 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:37.911 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:37.911 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:37.911 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3765655 00:28:46.085 Initializing NVMe Controllers 00:28:46.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:46.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:46.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:46.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:46.085 Initialization complete. Launching workers. 00:28:46.085 ======================================================== 00:28:46.085 Latency(us) 00:28:46.085 Device Information : IOPS MiB/s Average min max 00:28:46.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9903.00 38.68 6463.75 1849.18 11323.54 00:28:46.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9880.10 38.59 6478.93 2833.76 10840.98 00:28:46.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10094.00 39.43 6341.17 2302.30 11122.37 00:28:46.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9847.10 38.47 6501.22 2139.95 10827.20 00:28:46.085 ======================================================== 00:28:46.085 Total : 39724.20 155.17 6445.67 1849.18 11323.54 00:28:46.085 00:28:46.085 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:46.085 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.085 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:46.085 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.085 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:46.085 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.085 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.085 rmmod nvme_tcp 00:28:46.085 rmmod nvme_fabrics 00:28:46.085 rmmod nvme_keyring 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3765399 ']' 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3765399 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3765399 ']' 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3765399 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3765399 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3765399' 00:28:46.379 killing process with pid 3765399 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3765399 00:28:46.379 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3765399 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.763 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.665 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.665 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:49.665 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:49.665 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:51.039 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:53.570 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.835 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.836 12:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:58.836 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:58.836 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:58.836 Found net devices under 0000:af:00.0: cvl_0_0 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.836 12:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:58.836 Found net devices under 0000:af:00.1: cvl_0_1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:28:58.836 00:28:58.836 --- 10.0.0.2 ping statistics --- 00:28:58.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.836 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:58.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:28:58.836 00:28:58.836 --- 10.0.0.1 ping statistics --- 00:28:58.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.836 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:58.836 net.core.busy_poll = 1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:58.836 net.core.busy_read = 1 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:58.836 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:58.837 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3769812 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3769812 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3769812 ']' 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.094 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.094 [2024-12-10 12:32:05.757269] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:28:59.094 [2024-12-10 12:32:05.757355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.094 [2024-12-10 12:32:05.875356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.352 [2024-12-10 12:32:05.983238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:59.352 [2024-12-10 12:32:05.983284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.352 [2024-12-10 12:32:05.983294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.352 [2024-12-10 12:32:05.983304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.352 [2024-12-10 12:32:05.983312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.352 [2024-12-10 12:32:05.985471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.352 [2024-12-10 12:32:05.985546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.352 [2024-12-10 12:32:05.985603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.352 [2024-12-10 12:32:05.985617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.917 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.176 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.176 12:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:00.176 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.176 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.434 [2024-12-10 12:32:07.002356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.434 Malloc1 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.434 [2024-12-10 12:32:07.133801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3770096 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:00.434 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:02.331 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:02.331 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.331 12:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:02.589 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.589 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:02.589 "tick_rate": 2100000000, 00:29:02.589 "poll_groups": [ 00:29:02.589 { 00:29:02.589 "name": "nvmf_tgt_poll_group_000", 00:29:02.589 "admin_qpairs": 1, 00:29:02.589 "io_qpairs": 2, 00:29:02.589 "current_admin_qpairs": 1, 00:29:02.589 "current_io_qpairs": 2, 00:29:02.589 "pending_bdev_io": 0, 00:29:02.589 "completed_nvme_io": 25224, 00:29:02.589 "transports": [ 00:29:02.589 { 00:29:02.589 "trtype": "TCP" 00:29:02.589 } 00:29:02.589 ] 00:29:02.589 }, 00:29:02.589 { 00:29:02.589 "name": "nvmf_tgt_poll_group_001", 00:29:02.589 "admin_qpairs": 0, 00:29:02.589 "io_qpairs": 2, 00:29:02.589 "current_admin_qpairs": 0, 00:29:02.589 "current_io_qpairs": 2, 00:29:02.589 "pending_bdev_io": 0, 00:29:02.589 "completed_nvme_io": 25245, 00:29:02.589 "transports": [ 00:29:02.589 { 00:29:02.589 "trtype": "TCP" 00:29:02.589 } 00:29:02.589 ] 00:29:02.589 }, 00:29:02.589 { 00:29:02.589 "name": "nvmf_tgt_poll_group_002", 00:29:02.589 "admin_qpairs": 0, 00:29:02.589 "io_qpairs": 0, 00:29:02.589 "current_admin_qpairs": 0, 00:29:02.589 "current_io_qpairs": 0, 00:29:02.589 "pending_bdev_io": 0, 00:29:02.589 "completed_nvme_io": 0, 00:29:02.589 "transports": [ 00:29:02.589 { 00:29:02.589 "trtype": "TCP" 00:29:02.589 } 00:29:02.589 ] 00:29:02.589 }, 00:29:02.589 { 00:29:02.589 "name": "nvmf_tgt_poll_group_003", 00:29:02.589 "admin_qpairs": 0, 00:29:02.589 "io_qpairs": 0, 00:29:02.589 "current_admin_qpairs": 0, 00:29:02.589 "current_io_qpairs": 0, 00:29:02.589 "pending_bdev_io": 0, 00:29:02.589 "completed_nvme_io": 0, 00:29:02.589 "transports": [ 00:29:02.589 { 00:29:02.589 "trtype": "TCP" 00:29:02.589 } 00:29:02.589 ] 00:29:02.589 } 00:29:02.589 ] 00:29:02.589 }' 00:29:02.589 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:02.589 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:02.589 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:02.589 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:02.589 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3770096 00:29:10.699 Initializing NVMe Controllers 00:29:10.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:10.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:10.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:10.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:10.699 Initialization complete. Launching workers. 
00:29:10.699 ======================================================== 00:29:10.699 Latency(us) 00:29:10.699 Device Information : IOPS MiB/s Average min max 00:29:10.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7447.40 29.09 8594.23 1574.51 53584.29 00:29:10.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7388.60 28.86 8660.13 1593.81 54018.87 00:29:10.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6266.20 24.48 10214.25 1524.17 53326.84 00:29:10.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6402.30 25.01 9994.94 1633.91 54519.45 00:29:10.699 ======================================================== 00:29:10.699 Total : 27504.50 107.44 9307.06 1524.17 54519.45 00:29:10.699 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.699 rmmod nvme_tcp 00:29:10.699 rmmod nvme_fabrics 00:29:10.699 rmmod nvme_keyring 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3769812 ']' 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3769812 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3769812 ']' 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3769812 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3769812 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3769812' 00:29:10.699 killing process with pid 3769812 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3769812 00:29:10.699 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3769812 00:29:12.073 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.073 
12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.073 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.073 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.074 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:14.605 00:29:14.605 real 0m53.764s 00:29:14.605 user 2m58.106s 00:29:14.605 sys 0m10.421s 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.605 ************************************ 00:29:14.605 END TEST nvmf_perf_adq 00:29:14.605 ************************************ 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:14.605 ************************************ 00:29:14.605 START TEST nvmf_shutdown 00:29:14.605 ************************************ 00:29:14.605 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:14.605 * Looking for test storage... 
00:29:14.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:14.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.605 --rc genhtml_branch_coverage=1 00:29:14.605 --rc genhtml_function_coverage=1 00:29:14.605 --rc genhtml_legend=1 00:29:14.605 --rc geninfo_all_blocks=1 00:29:14.605 --rc geninfo_unexecuted_blocks=1 00:29:14.605 00:29:14.605 ' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:14.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.605 --rc genhtml_branch_coverage=1 00:29:14.605 --rc genhtml_function_coverage=1 00:29:14.605 --rc genhtml_legend=1 00:29:14.605 --rc geninfo_all_blocks=1 00:29:14.605 --rc geninfo_unexecuted_blocks=1 00:29:14.605 00:29:14.605 ' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:14.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.605 --rc genhtml_branch_coverage=1 00:29:14.605 --rc genhtml_function_coverage=1 00:29:14.605 --rc genhtml_legend=1 00:29:14.605 --rc geninfo_all_blocks=1 00:29:14.605 --rc geninfo_unexecuted_blocks=1 00:29:14.605 00:29:14.605 ' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:14.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.605 --rc genhtml_branch_coverage=1 00:29:14.605 --rc genhtml_function_coverage=1 00:29:14.605 --rc genhtml_legend=1 00:29:14.605 --rc geninfo_all_blocks=1 00:29:14.605 --rc geninfo_unexecuted_blocks=1 00:29:14.605 00:29:14.605 ' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
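The entries above trace the shell version comparison from scripts/common.sh: 'lt 1.15 2' delegates to cmp_versions, which splits both version strings on '.', '-' and ':' into component arrays and compares them position by position. A condensed sketch of that logic, with the numeric-validation guards of the real helper omitted for brevity:

    #!/usr/bin/env bash
    # Condensed sketch of the cmp_versions logic traced above; the real
    # helper in scripts/common.sh additionally validates that each
    # component is a decimal number before comparing.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v c1 c2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # missing components count as 0
            ((c1 > c2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((c1 < c2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]                      # all equal: true for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the branch taken above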
00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.605 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:14.606 12:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.606 ************************************ 00:29:14.606 START TEST nvmf_shutdown_tc1 00:29:14.606 ************************************ 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.606 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.873 12:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.873 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.874 12:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:19.874 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:19.874 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:19.874 Found net devices under 0000:af:00.0: cvl_0_0 00:29:19.874 12:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:19.874 Found net devices under 0000:af:00.1: cvl_0_1 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:29:19.874 00:29:19.874 --- 10.0.0.2 ping statistics --- 00:29:19.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.874 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:29:19.874 00:29:19.874 --- 10.0.0.1 ping statistics --- 00:29:19.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.874 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:29:19.874 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.875 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3775312 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3775312 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3775312 ']' 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
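waitforlisten 3775312 above blocks until the just-launched nvmf_tgt is both still alive and answering on /var/tmp/spdk.sock before the test proceeds. A minimal sketch of such a wait loop, assuming scripts/rpc.py is on hand and using an illustrative 100 x 0.5 s retry budget (the real helper in autotest_common.sh is more configurable):

    # Minimal waitforlisten-style sketch: poll until the target process is
    # up and its UNIX-domain RPC socket responds. rpc_get_methods is a
    # cheap RPC that succeeds as soon as the app is listening; the retry
    # budget here is an assumption, not taken from this log.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process exited early
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # socket is answering
            fi
            sleep 0.5
        done
        return 1                                     # timed out
    }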
00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.133 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 [2024-12-10 12:32:26.817122] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:20.133 [2024-12-10 12:32:26.817219] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.133 [2024-12-10 12:32:26.934840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.392 [2024-12-10 12:32:27.042227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.392 [2024-12-10 12:32:27.042272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.392 [2024-12-10 12:32:27.042282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.392 [2024-12-10 12:32:27.042291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.392 [2024-12-10 12:32:27.042300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.392 [2024-12-10 12:32:27.044537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.392 [2024-12-10 12:32:27.044609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.392 [2024-12-10 12:32:27.044693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.392 [2024-12-10 12:32:27.044714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:20.958 [2024-12-10 12:32:27.672960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:20.958 12:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.958 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.216 Malloc1 
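The ten identical "for i in ${num_subsystems[@]}" / cat passes above are shutdown.sh appending one RPC stanza per subsystem to rpcs.txt, which the rpc_cmd at shutdown.sh line 36 then replays as a single batch; the Malloc1..Malloc10 lines around this point are the bdev-creation acknowledgements coming back. A sketch of the batching idiom, with illustrative RPC arguments (the bdev size, serial numbers and listener details below are assumptions, not copied from shutdown.sh):

# One pass per subsystem appends four RPCs; the whole file is then replayed
# through one rpc.py process instead of forty separate invocations.
num_subsystems=({1..10})
rpcs=/tmp/rpcs.txt
rm -f "$rpcs"
for i in "${num_subsystems[@]}"; do
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem -a -s SPDK$i nqn.2016-06.io.spdk:cnode$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420 nqn.2016-06.io.spdk:cnode$i
EOF
done
./scripts/rpc.py -s /var/tmp/spdk.sock < "$rpcs"   # rpc.py reads commands from stdin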
00:29:21.216 [2024-12-10 12:32:27.837303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.216 Malloc2 00:29:21.216 Malloc3 00:29:21.473 Malloc4 00:29:21.473 Malloc5 00:29:21.730 Malloc6 00:29:21.730 Malloc7 00:29:21.730 Malloc8 00:29:21.990 Malloc9 00:29:21.990 Malloc10 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3775778 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3775778 /var/tmp/bdevperf.sock 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3775778 ']' 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:21.990 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:21.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
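The "--json /dev/fd/63" in the bdev_svc command above is bash process substitution at work: the output of gen_nvmf_target_json is handed to the app as a read-only pseudo-file, so the generated config never touches disk. A stripped-down reproduction of the idiom with a stand-in generator:

# <(...) expands to /dev/fd/N; the app simply open()s and reads it.
gen_config() {
    printf '{ "subsystems": [] }\n'   # stand-in for gen_nvmf_target_json
}
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_config)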
00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:21.991 { 00:29:21.991 "params": { 00:29:21.991 "name": "Nvme$subsystem", 00:29:21.991 "trtype": "$TEST_TRANSPORT", 00:29:21.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.991 "adrfam": "ipv4", 00:29:21.991 "trsvcid": "$NVMF_PORT", 00:29:21.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.991 "hdgst": ${hdgst:-false}, 00:29:21.991 "ddgst": ${ddgst:-false} 00:29:21.991 }, 00:29:21.991 "method": "bdev_nvme_attach_controller" 00:29:21.991 } 00:29:21.991 EOF 00:29:21.991 )") 00:29:21.991 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.250 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.250 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.250 { 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme$subsystem", 00:29:22.250 "trtype": "$TEST_TRANSPORT", 00:29:22.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "$NVMF_PORT", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.250 "hdgst": ${hdgst:-false}, 00:29:22.250 "ddgst": ${ddgst:-false} 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 } 00:29:22.250 EOF 00:29:22.250 )") 00:29:22.250 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.250 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
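Each heredoc pass above appends one bdev_nvme_attach_controller parameter block to config[], with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded at cat time; jq then validates and pretty-prints the comma-joined result, which is the JSON that follows. A condensed sketch of the assemble-and-join step, using three controllers instead of ten and a simplified wrapper object:

config=()
for i in 1 2 3; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# IFS=, makes "${config[*]}" join the fragments with commas, as nvmf/common.sh@585 does
IFS=,
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .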
00:29:22.250 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:22.250 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme1", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme2", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme3", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme4", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme5", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme6", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme7", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme8", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme9", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 },{ 00:29:22.250 "params": { 00:29:22.250 "name": "Nvme10", 00:29:22.250 "trtype": "tcp", 00:29:22.250 "traddr": "10.0.0.2", 00:29:22.250 "adrfam": "ipv4", 00:29:22.250 "trsvcid": "4420", 00:29:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:22.250 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:22.250 "hdgst": false, 00:29:22.250 "ddgst": false 00:29:22.250 }, 00:29:22.250 "method": "bdev_nvme_attach_controller" 00:29:22.250 }' 00:29:22.250 [2024-12-10 12:32:28.829173] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:22.250 [2024-12-10 12:32:28.829275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:22.250 [2024-12-10 12:32:28.945095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.250 [2024-12-10 12:32:29.057683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3775778 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:24.148 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:25.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3775778 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3775312 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 
"trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 
"params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.083 { 00:29:25.083 "params": { 00:29:25.083 "name": "Nvme$subsystem", 00:29:25.083 "trtype": "$TEST_TRANSPORT", 00:29:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.083 "adrfam": "ipv4", 00:29:25.083 "trsvcid": "$NVMF_PORT", 00:29:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.083 "hdgst": ${hdgst:-false}, 00:29:25.083 "ddgst": ${ddgst:-false} 00:29:25.083 }, 00:29:25.083 "method": "bdev_nvme_attach_controller" 00:29:25.083 } 00:29:25.083 EOF 00:29:25.083 )") 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:25.083 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:25.083 [2024-12-10 12:32:31.687570] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:29:25.083 [2024-12-10 12:32:31.687658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776251 ] 00:29:25.084 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:25.084 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme1", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme2", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme3", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme4", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme5", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme6", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme7", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 
00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme8", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme9", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 },{ 00:29:25.084 "params": { 00:29:25.084 "name": "Nvme10", 00:29:25.084 "trtype": "tcp", 00:29:25.084 "traddr": "10.0.0.2", 00:29:25.084 "adrfam": "ipv4", 00:29:25.084 "trsvcid": "4420", 00:29:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:25.084 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:25.084 "hdgst": false, 00:29:25.084 "ddgst": false 00:29:25.084 }, 00:29:25.084 "method": "bdev_nvme_attach_controller" 00:29:25.084 }' 00:29:25.084 [2024-12-10 12:32:31.802903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.342 [2024-12-10 12:32:31.915231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:27.242 Running I/O for 1 seconds...
00:29:28.437 1809.00 IOPS, 113.06 MiB/s
00:29:28.437 Latency(us)
00:29:28.437 [2024-12-10T11:32:35.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.437 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme1n1 : 1.11 234.69 14.67 0.00 0.00 268216.76 5804.62 261644.68
00:29:28.437 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme2n1 : 1.12 233.75 14.61 0.00 0.00 264280.56 2527.82 242670.45
00:29:28.437 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme3n1 : 1.10 232.60 14.54 0.00 0.00 260674.56 17226.61 245666.38
00:29:28.437 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme4n1 : 1.16 275.09 17.19 0.00 0.00 215373.92 12919.95 241671.80
00:29:28.437 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme5n1 : 1.15 223.06 13.94 0.00 0.00 261698.32 20097.71 243669.09
00:29:28.437 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme6n1 : 1.15 225.12 14.07 0.00 0.00 253511.61 2262.55 245666.38
00:29:28.437 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme7n1 : 1.21 263.41 16.46 0.00 0.00 212987.61 6147.90 239674.51
00:29:28.437 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme8n1 : 1.16 221.18 13.82 0.00 0.00 247678.78 15728.64 246665.02
00:29:28.437 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme9n1 : 1.23 260.90 16.31 0.00 0.00 207596.79 9549.53 247663.66
00:29:28.437 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.437 Verification LBA range: start 0x0 length 0x400
00:29:28.437 Nvme10n1 : 1.23 260.33 16.27 0.00 0.00 203496.11 11421.99 267636.54
00:29:28.437 [2024-12-10T11:32:35.263Z] ===================================================================================================================
00:29:28.437 [2024-12-10T11:32:35.263Z] Total : 2430.12 151.88 0.00 0.00 236968.98 2262.55 267636.54
00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.369 rmmod nvme_tcp 00:29:29.369 rmmod nvme_fabrics 00:29:29.369 rmmod nvme_keyring 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3775312 ']' 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3775312 00:29:29.369 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3775312 ']' 00:29:29.370 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3775312 00:29:29.370 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:29.370 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.627 12:32:36
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3775312 00:29:29.627 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:29.627 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:29.627 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3775312' 00:29:29.627 killing process with pid 3775312 00:29:29.628 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3775312 00:29:29.628 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3775312 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.909 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.811 00:29:34.811 real 0m20.139s 00:29:34.811 user 0m55.063s 00:29:34.811 sys 0m5.818s 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.811 ************************************ 00:29:34.811 END TEST nvmf_shutdown_tc1 00:29:34.811 ************************************ 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.811 ************************************ 00:29:34.811 START TEST nvmf_shutdown_tc2 00:29:34.811 ************************************ 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 
00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:34.811 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.811 12:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:34.811 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:34.811 Found net devices under 0000:af:00.0: cvl_0_0 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.811 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.812 12:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:34.812 Found net devices under 0000:af:00.1: cvl_0_1 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
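The device scan a few lines up is common.sh bucketing the machine's NICs by PCI "vendor:device" ID before any network plumbing starts: 0x8086:0x159b, reported for 0000:af:00.0 and 0000:af:00.1, is an Intel E810-family port bound to the ice driver; 0x8086:0x37d2 would land in the x722 bucket; the 0x15b3 keys cover Mellanox parts. The odd-looking `[[ 0x159b == \0\x\1\0\1\7 ]]` tests are just xtrace escaping literal strings such as 0x1017 inside a pattern match. A condensed sketch of the idiom (names mirror the trace; pci_bus_cache is assumed to be populated earlier in common.sh):

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 device IDs
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # <- the two ports found on this box
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # ConnectX-family IDs, among others
    pci_devs=("${e810[@]}")                      # [[ e810 == e810 ]] picked this bucket
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci"                        # per-device driver/ID checks follow
    done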
00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.812 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:29:35.070 00:29:35.070 --- 10.0.0.2 ping statistics --- 00:29:35.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.070 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:29:35.070 00:29:35.070 --- 10.0.0.1 ping statistics --- 00:29:35.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.070 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3777928 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3777928 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3777928 ']' 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
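With both ports identified, nvmf_tcp_init splits them across a network namespace so target and initiator traffic crosses the physical link rather than the kernel loopback: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), the two pings prove the path in both directions, and an iptables rule, tagged SPDK_NVMF: so teardown can find and strip it later, opens the NVMe/TCP port. NVMF_APP is prefixed with the namespace wrapper at common.sh@293 and the launcher evidently prefixes it again, which is why the nvmf_tgt command above shows `ip netns exec cvl_0_0_ns_spdk` twice; re-entering the same namespace is redundant but harmless. The setup, roughly:

    # root ns:          cvl_0_1  10.0.0.1/24  (initiator)
    # cvl_0_0_ns_spdk:  cvl_0_0  10.0.0.2/24  (target)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'   # full comment elided here
    # -m 0x1E is binary 11110: reactors pinned to cores 1-4, matching the
    # four "Reactor started" notices just below
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E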
00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.070 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.070 [2024-12-10 12:32:41.798946] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:35.070 [2024-12-10 12:32:41.799038] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.328 [2024-12-10 12:32:41.916513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:35.328 [2024-12-10 12:32:42.020775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.328 [2024-12-10 12:32:42.020817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.328 [2024-12-10 12:32:42.020826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.328 [2024-12-10 12:32:42.020837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.328 [2024-12-10 12:32:42.020844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.328 [2024-12-10 12:32:42.023156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.328 [2024-12-10 12:32:42.023238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.328 [2024-12-10 12:32:42.023336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.328 [2024-12-10 12:32:42.023358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.893 [2024-12-10 12:32:42.667598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:35.893 12:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.893 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.151 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.151 Malloc1 
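Each pass through that loop cats one more block of RPC commands into rpcs.txt (xtrace shows the `cat` but not the heredoc it writes), and the single rpc_cmd at shutdown.sh@36 replays the whole file over one rpc.py session, which is where the Malloc1 notice above and the Malloc2 through Malloc10 notices below come from. The per-iteration payload is not visible in this trace; a plausible fragment, using real rpc.py method names but with illustrative sizes and serial numbers, would be:

    # one subsystem per loop index $i -- sizes/serials are guesses, not from the log
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The cnode1..cnode10 NQNs are confirmed further down, where the generated bdevperf config attaches to exactly those subsystems on 10.0.0.2:4420.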
00:29:36.151 [2024-12-10 12:32:42.825974] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.151 Malloc2 00:29:36.409 Malloc3 00:29:36.409 Malloc4 00:29:36.409 Malloc5 00:29:36.666 Malloc6 00:29:36.667 Malloc7 00:29:36.667 Malloc8 00:29:36.924 Malloc9 00:29:36.924 Malloc10 00:29:36.924 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.924 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:36.924 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.924 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.182 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3778229 00:29:37.182 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3778229 /var/tmp/bdevperf.sock 00:29:37.182 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3778229 ']' 00:29:37.182 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:37.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:37.183 { 00:29:37.183 "params": { 00:29:37.183 "name": "Nvme$subsystem", 00:29:37.183 "trtype": "$TEST_TRANSPORT", 00:29:37.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.183 "adrfam": "ipv4", 00:29:37.183 "trsvcid": "$NVMF_PORT", 00:29:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.183 "hdgst": ${hdgst:-false}, 00:29:37.183 "ddgst": ${ddgst:-false} 00:29:37.183 }, 00:29:37.183 "method": "bdev_nvme_attach_controller" 00:29:37.183 } 00:29:37.183 EOF 00:29:37.183 )") 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
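gen_nvmf_target_json is the loop traced above: each iteration expands one bdev_nvme_attach_controller stanza through a heredoc into the config array, then the fragments are comma-joined ("${config[*]}" under IFS=,) and run through `jq .` to validate and pretty-print; the joined document is what printf emits next. A skeleton of the idiom with abbreviated stanza fields (note `<<-` requires the heredoc body to be tab-indented, and the real helper embeds the joined fragments in a larger config document; a bracket wrapper stands in for that here):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<-EOF
            {
              "params": {
                "name": "Nvme$subsystem",
                "traddr": "$NVMF_FIRST_TARGET_IP",
                "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
              },
              "method": "bdev_nvme_attach_controller"
            }
            EOF
            )")
        done
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .  # comma-join the stanzas, then validate
    }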
00:29:37.183 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:37.184 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme1", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme2", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme3", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme4", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme5", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme6", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme7", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme8", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme9", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 },{ 00:29:37.184 "params": { 00:29:37.184 "name": "Nvme10", 00:29:37.184 "trtype": "tcp", 00:29:37.184 "traddr": "10.0.0.2", 00:29:37.184 "adrfam": "ipv4", 00:29:37.184 "trsvcid": "4420", 00:29:37.184 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:37.184 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:37.184 "hdgst": false, 00:29:37.184 "ddgst": false 00:29:37.184 }, 00:29:37.184 "method": "bdev_nvme_attach_controller" 00:29:37.184 }' 00:29:37.184 [2024-12-10 12:32:43.825826] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:37.184 [2024-12-10 12:32:43.825918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778229 ] 00:29:37.184 [2024-12-10 12:32:43.944906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.442 [2024-12-10 12:32:44.063427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.341 Running I/O for 10 seconds... 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.599 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:39.857 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.857 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:39.857 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:39.857 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3778229 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3778229 ']' 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3778229 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3778229 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:40.115 12:32:46 
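waitforio is the gate between "bdevperf is running" and "it is safe to test shutdown": it polls bdev_get_iostat for Nvme1n1 over the bdevperf RPC socket up to ten times, 0.25 s apart, until num_read_ops crosses 100, which is exactly the 67-then-136 progression above. Only then does the test terminate bdevperf (a plain `kill`, no -9), so the "Received shutdown signal" path below is exercised with I/O genuinely in flight. The loop, as traced:

    ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done

In the summary table that follows, MiB/s is simply IOPS times the 64 KiB I/O size: 265.06 IOPS * 65536 B / 2^20 is about 16.57 MiB/s for Nvme1n1, and 2462.79 IOPS works out to the reported 153.92 MiB/s total.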
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:40.115 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3778229' killing process with pid 3778229 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3778229 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3778229
00:29:40.115 Received shutdown signal, test time was about 0.989011 seconds
00:29:40.115
00:29:40.115 Latency(us)
[2024-12-10T11:32:46.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:40.115 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme1n1 : 0.98 265.06 16.57 0.00 0.00 237910.92 5554.96 239674.51
00:29:40.115 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme2n1 : 0.99 259.02 16.19 0.00 0.00 240265.02 18350.08 248662.31
00:29:40.115 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme3n1 : 0.96 265.31 16.58 0.00 0.00 230226.16 18100.42 243669.09
00:29:40.115 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme4n1 : 0.96 267.56 16.72 0.00 0.00 223776.18 16602.45 222697.57
00:29:40.115 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme5n1 : 0.97 263.89 16.49 0.00 0.00 223063.04 18474.91 243669.09
00:29:40.115 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme6n1 : 0.94 203.85 12.74 0.00 0.00 282522.90 20971.52 249660.95
00:29:40.115 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme7n1 : 0.98 266.98 16.69 0.00 0.00 211638.32 3495.25 243669.09
00:29:40.115 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme8n1 : 0.98 264.38 16.52 0.00 0.00 209896.80 4837.18 221698.93
00:29:40.115 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme9n1 : 0.95 206.26 12.89 0.00 0.00 261539.03 2933.52 247663.66
00:29:40.115 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:40.115 Verification LBA range: start 0x0 length 0x400
00:29:40.115 Nvme10n1 : 0.96 200.46 12.53 0.00 0.00 264669.54 19223.89 267636.54
00:29:40.115 [2024-12-10T11:32:46.941Z] ===================================================================================================================
00:29:40.115 [2024-12-10T11:32:46.941Z] Total : 2462.79 153.92 0.00 0.00 235986.08 2933.52 267636.54
00:29:41.487 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 3777928 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.476 rmmod nvme_tcp 00:29:42.476 rmmod nvme_fabrics 00:29:42.476 rmmod nvme_keyring 00:29:42.476 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3777928 ']' 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3777928 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3777928 ']' 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3777928 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3777928 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3777928' 00:29:42.476 killing process with pid 3777928 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 3777928 00:29:42.476 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3777928 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.799 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.701 00:29:47.701 real 0m12.730s 00:29:47.701 user 0m43.291s 00:29:47.701 sys 0m1.705s 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:47.701 ************************************ 00:29:47.701 END TEST nvmf_shutdown_tc2 00:29:47.701 ************************************ 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:47.701 ************************************ 00:29:47.701 START TEST nvmf_shutdown_tc3 00:29:47.701 ************************************ 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.701 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.701 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.701 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.701 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.701 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.702 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.702 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.702 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.702 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:29:47.702 00:29:47.702 --- 10.0.0.2 ping statistics --- 00:29:47.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.702 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:29:47.702 00:29:47.702 --- 10.0.0.1 ping statistics --- 00:29:47.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.702 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3780116 00:29:47.702 12:32:54 
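The block above is the suite's single-host fabric: nvmf_tcp_init flushes both E810 ports, hides the target port inside a private network namespace, and leaves its sibling in the default namespace, so one machine can talk to itself as initiator (10.0.0.1) and target (10.0.0.2) over real NIC silicon. A condensed replay of the commands the trace runs, names taken from the trace (run as root; the iptables comment tag added by the suite's ipts wrapper is omitted):

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target port leaves the default ns
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator side stays outside
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side inside the ns
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                                         # default ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # namespace -> initiator

Both pings answering is what lets the function return 0 and the run proceed to modprobe nvme-tcp.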
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3780116 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3780116 ']' 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.702 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.961 [2024-12-10 12:32:54.565885] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:47.961 [2024-12-10 12:32:54.565971] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.961 [2024-12-10 12:32:54.683712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.219 [2024-12-10 12:32:54.791657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.219 [2024-12-10 12:32:54.791697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.219 [2024-12-10 12:32:54.791707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.219 [2024-12-10 12:32:54.791717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.219 [2024-12-10 12:32:54.791724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
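The launch line above gives the target a shared-memory id (-i 0), enables every tracepoint group (-e 0xFFFF, hence the "Tracepoint Group Mask 0xFFFF" notice), and pins it with core mask -m 0x1E: binary 11110, i.e. four reactors on cores 1 through 4, exactly matching "Total cores available: 4" and the reactor lines below. The tripled "ip netns exec cvl_0_0_ns_spdk" prefix is harmless; the execs simply nest into the same namespace. waitforlisten then blocks until the app answers on /var/tmp/spdk.sock; a sketch of such a poll (the retry bound and interval are assumptions, not the suite's values, and the rpc.py path varies by checkout):

nvmfpid=$!                       # pid of the nvmf_tgt just launched
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # rpc_get_methods answers once the RPC server is accepting connections.
    scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done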
00:29:48.219 [2024-12-10 12:32:54.794224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.219 [2024-12-10 12:32:54.794301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.219 [2024-12-10 12:32:54.794381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.219 [2024-12-10 12:32:54.794404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.785 [2024-12-10 12:32:55.440061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.785 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.785 Malloc1 00:29:49.043 [2024-12-10 12:32:55.612520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.043 Malloc2 00:29:49.043 Malloc3 00:29:49.301 Malloc4 00:29:49.301 Malloc5 00:29:49.301 Malloc6 00:29:49.558 Malloc7 00:29:49.558 Malloc8 00:29:49.815 Malloc9 00:29:49.815 Malloc10 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3780432 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3780432 /var/tmp/bdevperf.sock 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3780432 ']' 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:49.815 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:49.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.815 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 
"name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.816 { 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme$subsystem", 00:29:49.816 "trtype": "$TEST_TRANSPORT", 00:29:49.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "$NVMF_PORT", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.816 "hdgst": ${hdgst:-false}, 00:29:49.816 "ddgst": ${ddgst:-false} 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 } 00:29:49.816 EOF 00:29:49.816 )") 00:29:49.816 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:49.816 [2024-12-10 12:32:56.640179] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:49.816 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme1", 00:29:49.816 "trtype": "tcp", 00:29:49.816 "traddr": "10.0.0.2", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "4420", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.816 "hdgst": false, 00:29:49.816 "ddgst": false 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 },{ 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme2", 00:29:49.816 "trtype": "tcp", 00:29:49.816 "traddr": "10.0.0.2", 00:29:49.816 "adrfam": "ipv4", 00:29:49.816 "trsvcid": "4420", 00:29:49.816 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:49.816 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:49.816 "hdgst": false, 00:29:49.816 "ddgst": false 00:29:49.816 }, 00:29:49.816 "method": "bdev_nvme_attach_controller" 00:29:49.816 },{ 00:29:49.816 "params": { 00:29:49.816 "name": "Nvme3", 00:29:49.816 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": "bdev_nvme_attach_controller" 00:29:49.817 },{ 00:29:49.817 "params": { 00:29:49.817 "name": "Nvme4", 00:29:49.817 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": "bdev_nvme_attach_controller" 00:29:49.817 },{ 00:29:49.817 "params": { 00:29:49.817 "name": "Nvme5", 00:29:49.817 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": "bdev_nvme_attach_controller" 00:29:49.817 },{ 00:29:49.817 "params": { 00:29:49.817 "name": "Nvme6", 00:29:49.817 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": "bdev_nvme_attach_controller" 00:29:49.817 },{ 00:29:49.817 "params": { 00:29:49.817 "name": "Nvme7", 00:29:49.817 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": 
"bdev_nvme_attach_controller" 00:29:49.817 },{ 00:29:49.817 "params": { 00:29:49.817 "name": "Nvme8", 00:29:49.817 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": "bdev_nvme_attach_controller" 00:29:49.817 },{ 00:29:49.817 "params": { 00:29:49.817 "name": "Nvme9", 00:29:49.817 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": "bdev_nvme_attach_controller" 00:29:49.817 },{ 00:29:49.817 "params": { 00:29:49.817 "name": "Nvme10", 00:29:49.817 "trtype": "tcp", 00:29:49.817 "traddr": "10.0.0.2", 00:29:49.817 "adrfam": "ipv4", 00:29:49.817 "trsvcid": "4420", 00:29:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:49.817 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:49.817 "hdgst": false, 00:29:49.817 "ddgst": false 00:29:49.817 }, 00:29:49.817 "method": "bdev_nvme_attach_controller" 00:29:49.817 }' 00:29:49.817 [2024-12-10 12:32:56.640277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780432 ] 00:29:50.074 [2024-12-10 12:32:56.759564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.074 [2024-12-10 12:32:56.874824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.973 Running I/O for 10 seconds... 
00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:52.549 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3780116 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3780116 ']' 
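waitforio is the gate for the shutdown scenario itself: it polls Nvme1n1's read counter over the bdevperf RPC socket until at least 100 reads have completed (135 in this run), proving I/O is genuinely in flight before the target is killed underneath it. Reconstructed from the trace; rpc_cmd is the suite's wrapper around scripts/rpc.py, and the sleep cadence is an assumption:

waitforio() {
    local i read_io_count ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough I/O observed (135 here); safe to kill the target mid-run
            break
        fi
        sleep 1
    done
    return $ret
}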
00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3780116 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3780116 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3780116' 00:29:52.550 killing process with pid 3780116 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3780116 00:29:52.550 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3780116 00:29:52.550
[2024-12-10 12:32:59.317400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set
[... the same recv-state error for tqpair=0x618000007480 repeats back to back, timestamps 12:32:59.317453 through 12:32:59.317972 ...]
[2024-12-10 12:32:59.320587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 12:32:59.320608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
[2024-12-10 12:32:59.320632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 12:32:59.320647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 12:32:59.320659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 12:32:59.320671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 12:32:59.320685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 12:32:59.320699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 12:32:59.320710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 12:32:59.320720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
[... the recv-state error for tqpair=0x618000009880 shown at 12:32:59.320608 was interleaved mid-line with several of the notices above (untangled here) and keeps repeating on its own, timestamps 12:32:59.320640 through 12:32:59.321187 ...]
[2024-12-10 12:32:59.324547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-10 12:32:59.324581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 12:32:59.324621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-10 12:32:59.324632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.551 [2024-12-10 12:32:59.324649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.551 [2024-12-10 12:32:59.324659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.551 [2024-12-10 12:32:59.324671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.551 [2024-12-10 12:32:59.324681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.551 [2024-12-10 12:32:59.324692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.551 [2024-12-10 12:32:59.324702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.551 [2024-12-10 12:32:59.324713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.551 [2024-12-10 12:32:59.324722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.551 [2024-12-10 12:32:59.324733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.551 [2024-12-10 12:32:59.324743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.324985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.324995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 
[2024-12-10 12:32:59.325285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 
12:32:59.325498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.552 [2024-12-10 12:32:59.325591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.552 [2024-12-10 12:32:59.325603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325709] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.325944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.553 [2024-12-10 12:32:59.325953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.553 [2024-12-10 12:32:59.326534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 
12:32:59.326707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 
12:32:59.326880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.553 [2024-12-10 12:32:59.326961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.326969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.326977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.326985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.326992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 
12:32:59.327050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.327293] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.554 [2024-12-10 12:32:59.329182] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.554 [2024-12-10 12:32:59.329228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:52.554 [2024-12-10 12:32:59.329297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:52.554 [2024-12-10 12:32:59.329731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to 
be set 00:29:52.554 [2024-12-10 12:32:59.329884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.329999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be 
set 00:29:52.554 [2024-12-10 12:32:59.330056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.554 [2024-12-10 12:32:59.330263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 
00:29:52.555 [2024-12-10 12:32:59.330271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.330279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.330288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.330296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.330307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.330315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.330324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.331198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.555 [2024-12-10 12:32:59.331234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:52.555 [2024-12-10 12:32:59.331249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.331288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.331462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:52.555 [2024-12-10 12:32:59.331487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.331578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.555 [2024-12-10 12:32:59.331649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.555 [2024-12-10 12:32:59.331661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.331683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:52.555 [2024-12-10 12:32:59.331994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.332023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.332034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:52.555 [2024-12-10 12:32:59.332044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:52.555
[2024-12-10 12:32:59.332053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:52.555
(previous message repeated verbatim with successive timestamps through [2024-12-10 12:32:59.332576])
[2024-12-10 12:32:59.332572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:52.556
[2024-12-10 12:32:59.332681] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.556
[2024-12-10 12:32:59.333060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:52.556
[2024-12-10 12:32:59.333085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:52.556
[2024-12-10 12:32:59.333098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:52.556
[2024-12-10 12:32:59.333112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:52.556
[2024-12-10 12:32:59.333764] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.556
[2024-12-10 12:32:59.335105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:52.556
(previous message repeated verbatim with successive timestamps through [2024-12-10 12:32:59.335732])
[2024-12-10 12:32:59.335328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.556
[2024-12-10 12:32:59.335548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.556
[2024-12-10 12:32:59.335561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557
[2024-12-10 12:32:59.335802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557
[2024-12-10 12:32:59.335812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.557 [2024-12-10 12:32:59.335825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.335846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.335868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.335890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.335913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.335937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.335960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.335982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.335991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 
12:32:59.336048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.557 [2024-12-10 12:32:59.336280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.557 [2024-12-10 12:32:59.336292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558 [2024-12-10 12:32:59.336701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558 [2024-12-10 12:32:59.336712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558
[2024-12-10 12:32:59.336723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558
[2024-12-10 12:32:59.336734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558
[2024-12-10 12:32:59.336744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558
[2024-12-10 12:32:59.336756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558
[2024-12-10 12:32:59.336768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558
[2024-12-10 12:32:59.336780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558
[2024-12-10 12:32:59.336791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558
[2024-12-10 12:32:59.336802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.558
[2024-12-10 12:32:59.336812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.558
[2024-12-10 12:32:59.336823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:29:52.558
[2024-12-10 12:32:59.339047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:52.558
[2024-12-10 12:32:59.339118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:52.558
[2024-12-10 12:32:59.339164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:52.558
(previous message repeated verbatim with successive timestamps through [2024-12-10 12:32:59.339720])
[2024-12-10 12:32:59.340288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.559
[2024-12-10 12:32:59.340322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420 00:29:52.559
[2024-12-10 12:32:59.340338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:52.559
[2024-12-10 12:32:59.340527] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.559
[2024-12-10 12:32:59.340686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:52.559
[2024-12-10 12:32:59.340864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:52.559
[2024-12-10 12:32:59.340902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:52.559
[2024-12-10 12:32:59.340914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:52.559
[2024-12-10 12:32:59.340926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:52.559 [2024-12-10 12:32:59.340942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:52.559 [2024-12-10 12:32:59.341308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.559 [2024-12-10 12:32:59.341332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:52.559 [2024-12-10 12:32:59.341345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:52.559 [2024-12-10 12:32:59.341362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:52.559 [2024-12-10 12:32:59.341412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.559 [2024-12-10 12:32:59.341441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.559 [2024-12-10 12:32:59.341463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.559 [2024-12-10 12:32:59.341485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.559 [2024-12-10 12:32:59.341512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:52.559 [2024-12-10 12:32:59.341546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.559 [2024-12-10 12:32:59.341571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.559 [2024-12-10 12:32:59.341592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.559 [2024-12-10 12:32:59.341614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:29:52.559 [2024-12-10 12:32:59.341623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.559 [2024-12-10 12:32:59.341633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set
00:29:52.559 [2024-12-10 12:32:59.341672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor
00:29:52.559 [2024-12-10 12:32:59.341696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor
00:29:52.559 [2024-12-10 12:32:59.341899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor
00:29:52.559 [2024-12-10 12:32:59.341983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.559 [2024-12-10 12:32:59.341999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.560 [... 12:32:59.342023-12:32:59.342917: READ sqid:1 cid:5-44 nsid:1 lba:17024-22016 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each interleaved with an ABORTED - SQ DELETION (00/08) completion ...]
00:29:52.560 [2024-12-10 12:32:59.343654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:29:52.561 [... 12:32:59.343679-12:32:59.344209: last message repeated 62 times ...]
00:29:52.561 [2024-12-10 12:32:59.345671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:29:52.562 [... 12:32:59.345698-12:32:59.346205: last message repeated 62 times ...]
00:29:52.562 [2024-12-10 12:32:59.353821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.562 [... 12:32:59.353843-12:32:59.354527: READ sqid:1 cid:45-55 (lba:22144-23424), WRITE sqid:1 cid:0-3 (lba:24576-24960) and READ sqid:1 cid:56-63 (lba:23552-24448), each len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and interleaved with an ABORTED - SQ DELETION (00/08) completion ...]
00:29:52.563 [2024-12-10 12:32:59.354541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d500 is same with the state(6) to be set
00:29:52.563 [2024-12-10 12:32:59.356312] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:52.563 [2024-12-10 12:32:59.356506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:52.563 [2024-12-10 12:32:59.356552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:52.563 [2024-12-10 12:32:59.356567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:52.563 [2024-12-10 12:32:59.356583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:52.563 [2024-12-10 12:32:59.356597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:52.563 [2024-12-10 12:32:59.356661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor
00:29:52.563 [2024-12-10 12:32:59.356692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor
00:29:52.563 [... 12:32:59.356744-12:32:59.356852: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) ...]
00:29:52.563 [2024-12-10 12:32:59.356866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set
00:29:52.563 [... 12:32:59.356910-12:32:59.357013: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) ...]
00:29:52.563 [2024-12-10 12:32:59.357033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set
00:29:52.563 [2024-12-10 12:32:59.357480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.563 [2024-12-10 12:32:59.357512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:29:52.563 [2024-12-10 12:32:59.357532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:29:52.563 [... 12:32:59.358089-12:32:59.359702: READ sqid:1 cid:20-63 (lba:18944-24448) and WRITE sqid:1 cid:0-7 (lba:24576-25472), each len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and interleaved with an ABORTED - SQ DELETION (00/08) completion ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.359637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.359658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.359673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.359688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.359702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.359716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(6) to be set 00:29:52.564 [2024-12-10 12:32:59.361478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.564 [2024-12-10 12:32:59.361850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.564 [2024-12-10 12:32:59.361864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.361881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.361895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.361912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.361927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.361944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.361958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.361976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.361990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.362981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.362998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.363015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.363029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.363045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.363059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.363076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.363091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.363107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.565 [2024-12-10 12:32:59.363121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.565 [2024-12-10 12:32:59.363136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.363484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.363496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:29:52.566 [2024-12-10 12:32:59.365024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.365050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.365068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.365079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.365094] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.365106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.365119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.365132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.365144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.365156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.365181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.566 [2024-12-10 12:32:59.365194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.566 [2024-12-10 12:32:59.365208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.831 [2024-12-10 12:32:59.365674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.831 [2024-12-10 12:32:59.365685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.365985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.365995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.832 [2024-12-10 12:32:59.366120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 
12:32:59.366368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.832 [2024-12-10 12:32:59.366629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.832 [2024-12-10 12:32:59.366641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032eb80 is same with the state(6) to be set 00:29:52.832 [2024-12-10 12:32:59.368034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:52.832 [2024-12-10 12:32:59.368065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:52.833 [2024-12-10 12:32:59.368085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:52.833 [2024-12-10 12:32:59.368101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:52.833 [2024-12-10 12:32:59.368182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:52.833 [2024-12-10 12:32:59.368222] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:52.833 [2024-12-10 12:32:59.368255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:52.833 [2024-12-10 12:32:59.368285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:52.833 [2024-12-10 12:32:59.368313] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:29:52.833 [2024-12-10 12:32:59.368602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:52.833 [2024-12-10 12:32:59.368819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.833 [2024-12-10 12:32:59.368842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420
00:29:52.833 [2024-12-10 12:32:59.368855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set
00:29:52.833 [2024-12-10 12:32:59.369008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.833 [2024-12-10 12:32:59.369029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:29:52.833 [2024-12-10 12:32:59.369041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set
00:29:52.833 [2024-12-10 12:32:59.369147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.833 [2024-12-10 12:32:59.369164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420
00:29:52.833 [2024-12-10 12:32:59.369183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set
00:29:52.833 [2024-12-10 12:32:59.369374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.833 [2024-12-10 12:32:59.369390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420
00:29:52.833 [2024-12-10 12:32:59.369402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set
00:29:52.833 [2024-12-10 12:32:59.369412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:52.833 [2024-12-10 12:32:59.369423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:52.833 [2024-12-10 12:32:59.369436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:52.833 [2024-12-10 12:32:59.369449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:52.833 [2024-12-10 12:32:59.370362 - 12:32:59.371361] nvme_qpair.c: repeated *NOTICE* command/completion pairs: READ sqid:1 cid:0-39 nsid:1 lba:16384-21376 len:128, all SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.834 [2024-12-10 12:32:59.371373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.371932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.371945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(6) to be set 00:29:52.834 [2024-12-10 12:32:59.373326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.373347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.373370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.373382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.373396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.373406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.373421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.373432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.834 [2024-12-10 12:32:59.373445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.834 [2024-12-10 12:32:59.373456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.373978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.373989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.835 [2024-12-10 12:32:59.374273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.835 [2024-12-10 12:32:59.374285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.836 [2024-12-10 12:32:59.374675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.836 [2024-12-10 12:32:59.374687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.836 [2024-12-10 12:32:59.374863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.836 [2024-12-10 12:32:59.374874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e400 is same with the state(6) to be set
00:29:52.836 [2024-12-10 12:32:59.376222] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:52.836 [2024-12-10 12:32:59.376690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:52.836 [2024-12-10 12:32:59.376718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:52.836 [2024-12-10 12:32:59.376921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.836 [2024-12-10 12:32:59.376940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420
00:29:52.836 [2024-12-10 12:32:59.376953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set
00:29:52.836 [2024-12-10 12:32:59.376968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor
00:29:52.836 [2024-12-10 12:32:59.376983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor
00:29:52.836 [2024-12-10 12:32:59.376998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor
00:29:52.836 [2024-12-10 12:32:59.377015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor
00:29:52.836 [2024-12-10 12:32:59.377579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.836 [2024-12-10 12:32:59.377603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420
00:29:52.836 [2024-12-10 12:32:59.377615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set
00:29:52.836 [2024-12-10 12:32:59.377776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.836 [2024-12-10 12:32:59.377792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000329680 with addr=10.0.0.2, port=4420
00:29:52.836 [2024-12-10 12:32:59.377803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set
00:29:52.836 [2024-12-10 12:32:59.377816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor
00:29:52.836 [2024-12-10 12:32:59.377829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:29:52.836 [2024-12-10 12:32:59.377839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:29:52.836 [2024-12-10 12:32:59.377850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:52.836 [2024-12-10 12:32:59.377860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:29:52.836 [2024-12-10 12:32:59.377873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:52.836 [2024-12-10 12:32:59.377883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:52.836 [2024-12-10 12:32:59.377893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:52.836 [2024-12-10 12:32:59.377902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:52.836 [2024-12-10 12:32:59.377913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:52.836 [2024-12-10 12:32:59.377921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:52.836 [2024-12-10 12:32:59.377931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:52.836 [2024-12-10 12:32:59.377940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:52.836 [2024-12-10 12:32:59.377950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:52.836 [2024-12-10 12:32:59.377958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:52.837 [2024-12-10 12:32:59.377968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:52.837 [2024-12-10 12:32:59.377977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:52.837 [2024-12-10 12:32:59.378778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.837 [2024-12-10 12:32:59.378802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.837 [2024-12-10 12:32:59.378821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.837 [2024-12-10 12:32:59.378833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.837 [2024-12-10 12:32:59.378850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.837 [2024-12-10 12:32:59.378861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.837 [2024-12-10 12:32:59.378874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.837 [2024-12-10 12:32:59.378885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.837 [2024-12-10 12:32:59.378899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.837 [2024-12-10 12:32:59.378910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.837 [2024-12-10 12:32:59.378922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.837 [2024-12-10 12:32:59.378941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.837 [2024-12-10 12:32:59.378953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837
[2024-12-10 12:32:59.378965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.378977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.378987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 
12:32:59.379203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.837 [2024-12-10 12:32:59.379419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.837 [2024-12-10 12:32:59.379429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.837 [2024-12-10 12:32:59.379443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.837 [2024-12-10 12:32:59.379453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ command / ABORTED - SQ DELETION completion notice pair repeats for cid:28 through cid:62 (lba 19968 through 24320, len:128), with only the cid, lba, and timestamps advancing ...]
00:29:52.838 [2024-12-10 12:32:59.380279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.838 [2024-12-10 12:32:59.380289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.838 [2024-12-10 12:32:59.380300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(6) to be set
00:29:52.838 [2024-12-10 12:32:59.380618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor
00:29:52.838 [2024-12-10 12:32:59.380634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor
00:29:52.838 [2024-12-10 12:32:59.380646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:52.838 [2024-12-10 12:32:59.380656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:52.838 [2024-12-10 12:32:59.380666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:52.838 [2024-12-10 12:32:59.380676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:52.838 [2024-12-10 12:32:59.380700] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:52.838 [2024-12-10 12:32:59.381939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:52.838 [2024-12-10 12:32:59.381979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:29:52.838 [2024-12-10 12:32:59.381993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:29:52.838 [2024-12-10 12:32:59.382004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:29:52.838 [2024-12-10 12:32:59.382013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:29:52.838 [2024-12-10 12:32:59.382024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:29:52.838 [2024-12-10 12:32:59.382032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:29:52.838 [2024-12-10 12:32:59.382042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:52.838 [2024-12-10 12:32:59.382050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:29:52.838 [2024-12-10 12:32:59.382132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.838 [2024-12-10 12:32:59.382147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ command / ABORTED - SQ DELETION completion notice pair repeats for cid:1 through cid:62 (lba 16512 through 24320, len:128), with only the cid, lba, and timestamps advancing ...]
00:29:52.840 [2024-12-10 12:32:59.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.840 [2024-12-10 12:32:59.383613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.840 [2024-12-10 12:32:59.383624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e680 is same with the state(6) to be set
00:29:52.840 [2024-12-10 12:32:59.384892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:52.840 [2024-12-10 12:32:59.384914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:52.840 [2024-12-10 12:32:59.384928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:52.840 [2024-12-10 12:32:59.384939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:52.840 [2024-12-10 12:32:59.384952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:52.840 [2024-12-10 12:32:59.384969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:52.840 task offset: 21376 on job bdev=Nvme2n1 fails
00:29:52.840
00:29:52.840 Latency(us)
00:29:52.840 [2024-12-10T11:32:59.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.840 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme1n1 ended in about 0.75 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme1n1 : 0.75 175.45 10.97 85.07 0.00 242626.82 18474.91 225693.50
00:29:52.840 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme2n1 ended in about 0.73 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme2n1 : 0.73 176.53 11.03 88.27 0.00 233002.34 4369.07 247663.66
00:29:52.840 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme3n1 ended in about 0.76 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme3n1 : 0.76 195.40 12.21 68.65 0.00 227013.27 20721.86 223696.21
00:29:52.840 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme4n1 ended in about 0.76 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme4n1 : 0.76 174.72 10.92 84.08 0.00 227867.26 10111.27 236678.58
00:29:52.840 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme5n1 ended in about 0.74 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme5n1 : 0.74 174.05 10.88 87.03 0.00 219714.64 19972.88 216705.71
00:29:52.840 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme6n1 ended in about 0.77 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme6n1 : 0.77 166.31 10.39 83.16 0.00 225522.51 19848.05 230686.72
00:29:52.840 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme7n1 ended in about 0.77 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme7n1 : 0.77 165.69 10.36 82.84 0.00 220966.85 28586.18 238675.87
00:29:52.840 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme8n1 ended in about 0.78 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme8n1 : 0.78 163.84 10.24 81.92 0.00 218294.04 16602.45 235679.94
00:29:52.840 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme9n1 ended in about 0.78 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme9n1 : 0.78 164.40 10.28 82.20 0.00 211761.25 17850.76 257650.10
00:29:52.840 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.840 Job: Nvme10n1 ended in about 0.76 seconds with error
00:29:52.840 Verification LBA range: start 0x0 length 0x400
00:29:52.840 Nvme10n1 : 0.76 83.73 5.23 83.73 0.00 302328.69 28586.18 275625.69
00:29:52.840 [2024-12-10T11:32:59.666Z] ===================================================================================================================
00:29:52.840 [2024-12-10T11:32:59.666Z] Total : 1640.13 102.51 826.94 0.00 230519.84 4369.07 275625.69
00:29:52.840 [2024-12-10 12:32:59.516293] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:52.841 [2024-12-10 12:32:59.516608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.516637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032aa80 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.516653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.517126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:52.841 [2024-12-10 12:32:59.517427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.517449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.517462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.517648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.517664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.517675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.517833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.517850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.517861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.518063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.518078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.518089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.518226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.518243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.518254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set
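The connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: the target side has already torn down its listener on 10.0.0.2:4420, so every reconnect attempt from the host initiator is refused. The mapping can be confirmed from a shell (one illustrative way to query the libc error tables; any strerror lookup would do):

    $ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    ECONNREFUSED - Connection refused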
00:29:52.841 [2024-12-10 12:32:59.518418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.518434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.518444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.518461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.518526] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:52.841 [2024-12-10 12:32:59.519268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.519295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032a080 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.519308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.519322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.519337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.519351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.519363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.519376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.519389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.519401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.519410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.519423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.519437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.519596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:52.841 [2024-12-10 12:32:59.519615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:52.841 [2024-12-10 12:32:59.519641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.519655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.519664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.519674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.519683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.519694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.519703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.519711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.519721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.519730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.519739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.519748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.519756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.519766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.519774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.519784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.519793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.519802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.519812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.519820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.519829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.519839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.519847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.519857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.519865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.520019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.520041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000329680 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.520052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.520155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.841 [2024-12-10 12:32:59.520175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420
00:29:52.841 [2024-12-10 12:32:59.520186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set
00:29:52.841 [2024-12-10 12:32:59.520196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.520205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.520215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.520226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:29:52.841 [2024-12-10 12:32:59.520267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.520282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor
00:29:52.841 [2024-12-10 12:32:59.520318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.520329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.520340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.520348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
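The Latency(us) table earlier in the log is internally consistent: each job's MiB/s column is its IOPS times the 64 KiB IO size. A quick check for the Nvme1n1 row (awk is used here only for the floating-point arithmetic; the numbers are copied from the table):

    awk 'BEGIN { iops = 175.45; io_size = 65536; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints: 10.97 MiB/s, matching the Nvme1n1 row of the table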
00:29:52.841 [2024-12-10 12:32:59.520358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:29:52.841 [2024-12-10 12:32:59.520367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:29:52.841 [2024-12-10 12:32:59.520376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:29:52.841 [2024-12-10 12:32:59.520392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:29:56.124 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:29:56.691 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3780432
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3780432
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3780432
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
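The NOT wait 3780432 trace above is the expected-failure idiom from autotest_common.sh: wait on the already-dead bdevperf PID returns a non-zero status, the helper remaps it (255 -> 127 -> 1) and then inverts it, so the test step succeeds precisely because the command failed. A minimal sketch of that pattern (an illustration of the idiom, not the exact autotest_common.sh implementation):

    # Sketch of an invert-the-exit-status helper; simplified from the trace above.
    NOT() {
        local es=0
        "$@" || es=$?                          # run the command, capture its exit status
        (( es > 128 )) && es=$(( es - 128 ))   # strip the signal offset, e.g. 255 -> 127
        (( es != 0 ))                          # succeed only if the command failed
    }

    NOT false && echo "failure was expected, so NOT succeeds"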
00:29:56.691 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3780116 ']'
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3780116
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3780116 ']'
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3780116
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3780116) - No such process
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3780116 is not found'
Process with pid 3780116 is not found
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:56.949 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
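killprocess 3780116 above probes the PID with kill -0 before signalling; the nvmf target had already exited, so bash reports "No such process" and the helper logs that instead of failing the teardown. The pattern, sketched (simplified; the real helper in autotest_common.sh does more bookkeeping):

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2> /dev/null; then    # signal 0 only checks existence
            kill "$pid"
            wait "$pid" 2> /dev/null || true    # reap it if it was our child
        else
            echo "Process with pid $pid is not found"
        fi
    }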
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:58.853
00:29:58.853 real 0m11.343s
00:29:58.853 user 0m33.107s
00:29:58.853 sys 0m1.559s
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:58.853 ************************************
00:29:58.853 END TEST nvmf_shutdown_tc3
00:29:58.853 ************************************
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:58.853 ************************************
00:29:58.853 START TEST nvmf_shutdown_tc4
00:29:58.853 ************************************
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.853 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.112 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.112 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
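For readability, the discovery logic traced above can be condensed: nvmf/common.sh walks each matching PCI function and resolves it to its kernel net device through sysfs. A minimal sketch following the variable names in the trace (illustrative, not the verbatim script):

    for pci in "${pci_devs[@]}"; do
        # each PCI function exposes its net devices under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this node both E810 functions (0000:af:00.0 and 0000:af:00.1) resolve to one interface each, cvl_0_0 and cvl_0_1, which become the TCP_INTERFACE_LIST used below.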
00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.112 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.113 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:29:59.371 00:29:59.371 --- 10.0.0.2 ping statistics --- 00:29:59.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.371 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:29:59.371 00:29:59.371 --- 10.0.0.1 ping statistics --- 00:29:59.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.371 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3782278 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3782278 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3782278 ']' 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
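The nvmf_tcp_init sequence traced above builds a loopback topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Collected from the trace, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, will carry 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt is then started inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E line above), so its NVMe/TCP listener binds on the target side of the link while the initiator later connects from the root namespace.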
00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.371 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:59.371 [2024-12-10 12:33:06.081700] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:59.371 [2024-12-10 12:33:06.081792] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.629 [2024-12-10 12:33:06.199838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.629 [2024-12-10 12:33:06.305891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.629 [2024-12-10 12:33:06.305935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.629 [2024-12-10 12:33:06.305945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.629 [2024-12-10 12:33:06.305955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.629 [2024-12-10 12:33:06.305963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.629 [2024-12-10 12:33:06.308241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.629 [2024-12-10 12:33:06.308315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.629 [2024-12-10 12:33:06.308396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.629 [2024-12-10 12:33:06.308418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:00.195 [2024-12-10 12:33:06.918818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:00.195 12:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.195 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:00.453 Malloc1 
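The rpcs.txt batch assembled by the cat loop above is not echoed in the log, but given the Malloc1 through Malloc10 bdevs that appear next and the listener on 10.0.0.2 port 4420, each of the ten subsystems is plausibly configured along the following lines. This is a hedged sketch: the bdev size, block size and serial number below are assumptions, not values from the log.

    # per subsystem i in 1..10 (sketch; 64 MiB size, 512 B blocks and the serial are assumed)
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The cnodeN names are confirmed further down, where spdk_nvme_perf (started with -q 128 -o 45056 against 10.0.0.2:4420) reports CQ transport errors for cnode1, cnode3, cnode4 and cnode10 once killprocess takes the target away mid-I/O; the flood of 'Write completed with error (sct=0, sc=8)' completions that follows is exactly the behavior tc4 exercises.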
00:30:00.453 [2024-12-10 12:33:07.084434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.453 Malloc2 00:30:00.453 Malloc3 00:30:00.711 Malloc4 00:30:00.711 Malloc5 00:30:00.970 Malloc6 00:30:00.970 Malloc7 00:30:00.970 Malloc8 00:30:01.227 Malloc9 00:30:01.227 Malloc10 00:30:01.228 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.228 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:01.228 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.228 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:01.228 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3782995 00:30:01.228 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:01.228 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:01.486 [2024-12-10 12:33:08.131396] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:06.755 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3782278 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3782278 ']' 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3782278 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3782278 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3782278' 00:30:06.756 killing process with pid 3782278 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3782278 00:30:06.756 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3782278 00:30:06.756 [2024-12-10 12:33:13.097420] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
00:30:06.756 [... the same recv-state error repeated nine more times for tqpair=0x618000009880 (12:33:13.097477 through 12:33:13.097550) ...]
00:30:06.756 [... several dozen repeated lines of 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' omitted ...]
00:30:06.756 [2024-12-10 12:33:13.101034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.756 NVMe io qpair process completion error
00:30:06.756 [... several dozen repeated write-error lines omitted ...]
00:30:06.756 [2024-12-10 12:33:13.104232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(6) to be set [nine occurrences in total, interleaved with write errors]
00:30:06.757 [... repeated write-error lines omitted ...]
00:30:06.757 [2024-12-10 12:33:13.106447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.757 NVMe io qpair process completion error
00:30:06.757 [2024-12-10 12:33:13.113526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set [nine occurrences in total]
00:30:06.757 [2024-12-10 12:33:13.113807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set [three occurrences in total, interleaved with write errors]
00:30:06.757 [... repeated write-error lines omitted ...]
00:30:06.757 [2024-12-10 12:33:13.114639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.757 [2024-12-10 12:33:13.115661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set [nine occurrences in total]
00:30:06.758 NVMe io qpair process completion error
00:30:06.758 [... repeated write-error lines omitted ...]
00:30:06.758 [2024-12-10 12:33:13.122152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set [five occurrences in total, interleaved with write errors]
00:30:06.758 [... repeated write-error lines omitted ...]
00:30:06.758 [2024-12-10 12:33:13.123144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.758 [2024-12-10 12:33:13.123417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e880 is same with the state(6) to be set [six occurrences in total]
00:30:06.758 [... repeated write-error lines omitted ...]
00:30:06.758 [2024-12-10 12:33:13.124587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set [seven occurrences in total, interleaved with write errors]
00:30:06.758 [2024-12-10 12:33:13.124957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.758 [... a long run of repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines omitted ...]
00:30:06.759 [2024-12-10 12:33:13.127618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:06.759 [... a long run of repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines omitted ...]
00:30:06.759 [2024-12-10 12:33:13.138279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.759 NVMe io qpair process completion error
00:30:06.759 [... repeated write-error lines omitted ...]
00:30:06.759 Write completed with
error (sct=0, sc=8) 00:30:06.759 starting I/O failed: -6 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 starting I/O failed: -6 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 starting I/O failed: -6 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.759 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 [2024-12-10 12:33:13.139826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 
00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 [2024-12-10 12:33:13.141777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.760 starting I/O failed: -6 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 
00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 [2024-12-10 12:33:13.144419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.760 Write completed with error (sct=0, sc=8) 00:30:06.760 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O 
failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O 
failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 [2024-12-10 12:33:13.159589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.761 NVMe io qpair process completion error 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 [2024-12-10 12:33:13.161276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.761 starting I/O failed: -6 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 
00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 starting I/O failed: -6 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.761 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 [2024-12-10 12:33:13.162976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, 
sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 [2024-12-10 12:33:13.165397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 
00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.762 Write completed with error (sct=0, sc=8) 00:30:06.762 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 
00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 starting I/O failed: -6 00:30:06.763 [2024-12-10 12:33:13.179606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.763 NVMe io qpair process completion error 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write 
completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 
00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.763 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error 
(sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 [2024-12-10 12:33:13.207825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.764 NVMe io qpair process completion error 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error 
(sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 [2024-12-10 12:33:13.209356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write 
completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 [2024-12-10 12:33:13.210988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.764 Write completed with error (sct=0, sc=8) 00:30:06.764 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, sc=8) 00:30:06.765 starting I/O failed: -6 00:30:06.765 Write completed with error (sct=0, 
sc=8)
00:30:06.765 starting I/O failed: -6
00:30:06.765 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.765 [2024-12-10 12:33:13.213450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.765 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.765 [2024-12-10 12:33:13.229929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.765 NVMe io qpair process completion error
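The two ERROR lines above come from spdk_nvme_qpair_process_completions() (nvme_qpair.c:812 in this build) once the TCP connection under an I/O qpair is gone: the call returns a negative errno instead of a completion count, -6 being -ENXIO ("No such device or address"), and every outstanding write completes with generic status sct=0, sc=8 (for the generic status type, command aborted due to SQ deletion). A minimal sketch of the host-side poll loop that would observe this; the qpair variable and error handling are illustrative, not taken from the test code:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair. A negative return value reports a failure of the
     * qpair itself; per-command errors are delivered through their callbacks. */
    static void poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no batch limit */);
        if (rc < 0) {
            /* -ENXIO (-6) here matches the "CQ transport error -6
             * (No such device or address)" lines in this log. */
            fprintf(stderr, "qpair processing failed: %d (%s)\n", rc, strerror(-rc));
        }
    }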
00:30:06.765 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.766 [2024-12-10 12:33:13.231591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.766 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.766 [2024-12-10 12:33:13.233379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:06.767 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.767 [2024-12-10 12:33:13.235918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.767 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.767 [2024-12-10 12:33:13.250101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.767 NVMe io qpair process completion error
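The interleaved "starting I/O failed: -6" lines are the submit side of the same failure: once a qpair is disconnected, new writes fail immediately at submission instead of being queued. A hedged sketch of such a submit path (the namespace, buffer, and LBA arguments are placeholders, not values from the test):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void write_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* Completion status (e.g. the sct=0, sc=8 aborts above) is examined here. */
    }

    static int submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                            void *buf, uint64_t lba, uint32_t lba_count)
    {
        int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                        write_cb, NULL, 0 /* io_flags */);
        if (rc != 0) {
            /* On a dead qpair this fails immediately, e.g. with -ENXIO (-6). */
            fprintf(stderr, "starting I/O failed: %d\n", rc);
        }
        return rc;
    }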
00:30:06.767 ["Write completed with error (sct=0, sc=8)" repeated for each draining completion; duplicates elided]
00:30:06.768 [2024-12-10 12:33:13.266878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.768 NVMe io qpair process completion error
00:30:06.768 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.768 [2024-12-10 12:33:13.268531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.768 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.768 [2024-12-10 12:33:13.270429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.769 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.769 [2024-12-10 12:33:13.272866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:06.769 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.769 [2024-12-10 12:33:13.287071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.769 NVMe io qpair process completion error
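After a CQ transport error the controller is marked failed and its qpairs stop accepting I/O; whether to recover or to tear down is the application's choice, and the test here simply lets the run finish and reports the failures. One plausible recovery pattern using public SPDK calls, shown only as an illustration and not as what this tool does:

    #include "spdk/nvme.h"

    /* Attempt to bring a failed fabrics controller and one of its I/O qpairs
     * back; a real application would also have to re-drive the aborted writes. */
    static int try_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
            return 0; /* nothing to do */
        }
        if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
            return -1; /* target is still unreachable */
        }
        /* I/O qpairs do not reconnect automatically after a reset. */
        return spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
    }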
00:30:06.769 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.769 [2024-12-10 12:33:13.288706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.770 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.770 [2024-12-10 12:33:13.290556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.770 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.770 [2024-12-10 12:33:13.293257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:06.770 [duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:30:06.770 [2024-12-10 12:33:13.307357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:06.771 NVMe io qpair process completion error
00:30:06.771 Initializing NVMe Controllers
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:30:06.771 Controller IO queue size 128, less than required.
00:30:06.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
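The repeated "Controller IO queue size 128, less than required" warnings mean the benchmark's queue depth exceeds the 128-entry I/O queue negotiated with each target, so surplus requests sit in a software queue inside the driver, as the message says; that can inflate the latency figures below but is otherwise harmless. If the target supports deeper queues, a larger size can be requested at connect time. A sketch under that assumption, reusing the transport address and one subsystem NQN from this log:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *connect_with_deeper_queue(void)
    {
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr_opts opts;

        memset(&trid, 0, sizeof(trid));
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode8");

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        /* Request a deeper queue; the target may still cap it lower. */
        opts.io_queue_size = 256;

        return spdk_nvme_connect(&trid, &opts, sizeof(opts));
    }

For scale, an inference from the printed numbers rather than anything the log states: assuming the MiB/s column is IOPS times bytes per command divided by 2^20, the totals in the table below give 770.01 / 17920.31, about 0.043 MiB, roughly 44 KiB per write, so a queue depth above 128 keeps several MiB of writes in flight per connection.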
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:06.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:06.771 Initialization complete. Launching workers.
00:30:06.771 ========================================================
00:30:06.771                                                                          Latency(us)
00:30:06.771 Device Information                                                     :      IOPS     MiB/s    Average        min        max
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1856.80     79.78   68944.19    1463.67  232283.30
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1831.44     78.69   67366.80    1345.14  166092.68
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1806.71     77.63   68400.27     863.67  159535.58
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1824.58     78.40   67904.75    1335.75  151008.25
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1799.64     77.33   68691.77    1024.11  169972.59
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1804.00     77.52   68482.37    1572.42  136024.04
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1730.01     74.34   71925.79    1206.86  163426.23
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1750.38     75.21   71289.56    1231.65  209297.85
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1757.03     75.50   70967.41    1036.72  203803.22
00:30:06.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1759.73     75.61   71185.65    1224.24  240748.13
00:30:06.771 ========================================================
00:30:06.771 Total                                                                  :  17920.31    770.01   69485.87     863.67  240748.13
00:30:06.771
00:30:06.771 [2024-12-10 12:33:13.340736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set
00:30:06.771 [2024-12-10 12:33:13.340797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f200 is same with the state(6) to be set
00:30:06.771 [2024-12-10 12:33:13.340838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fc00 is same with the state(6) to be set
00:30:06.771 [2024-12-10 12:33:13.340878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set
00:30:06.771 [2024-12-10 12:33:13.340919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001de00 is same with the state(6) to be set
00:30:06.771 [2024-12-10 12:33:13.340958] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(6) to be set 00:30:06.771 [2024-12-10 12:33:13.340999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:30:06.771 [2024-12-10 12:33:13.341042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020b00 is same with the state(6) to be set 00:30:06.771 [2024-12-10 12:33:13.341082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:30:06.771 [2024-12-10 12:33:13.341124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f700 is same with the state(6) to be set 00:30:06.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:10.055 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3782995 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3782995 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3782995 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:30:10.623 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.624 rmmod nvme_tcp 00:30:10.624 rmmod nvme_fabrics 00:30:10.624 rmmod nvme_keyring 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3782278 ']' 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3782278 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3782278 ']' 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3782278 00:30:10.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3782278) - No such process 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3782278 is not found' 00:30:10.624 Process with pid 3782278 is not found 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.624 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
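Two autotest_common.sh helpers dominate the trace around this point and are worth decoding. killprocess, traced above with pid 3782278, probes the pid with kill -0 before shooting it; here the nvmf target had already exited, so it only logs "Process with pid 3782278 is not found" and returns. And the START TEST/END TEST banners plus the real/user/sys triplets (the first ones appear just below) come from the run_test wrapper, which times each test script with bash's time keyword. Simplified sketches of both, reconstructed from the observed trace rather than quoted from the real definitions:

    # killprocess: kill -0 sends no signal, it only tests whether the pid exists
    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    # run_test: banner, time the test script, banner again
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }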
00:30:12.525 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.525 00:30:12.525 real 0m13.680s 00:30:12.525 user 0m39.271s 00:30:12.525 sys 0m5.051s 00:30:12.525 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.525 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:12.525 ************************************ 00:30:12.525 END TEST nvmf_shutdown_tc4 00:30:12.525 ************************************ 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:12.784 00:30:12.784 real 0m58.386s 00:30:12.784 user 2m50.958s 00:30:12.784 sys 0m14.433s 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:12.784 ************************************ 00:30:12.784 END TEST nvmf_shutdown 00:30:12.784 ************************************ 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:12.784 ************************************ 00:30:12.784 START TEST nvmf_nsid 00:30:12.784 ************************************ 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:12.784 * Looking for test storage... 
00:30:12.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:30:12.784 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.044 --rc genhtml_branch_coverage=1 00:30:13.044 --rc genhtml_function_coverage=1 00:30:13.044 --rc genhtml_legend=1 00:30:13.044 --rc geninfo_all_blocks=1 00:30:13.044 --rc geninfo_unexecuted_blocks=1 00:30:13.044 00:30:13.044 ' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.044 --rc genhtml_branch_coverage=1 00:30:13.044 --rc genhtml_function_coverage=1 00:30:13.044 --rc genhtml_legend=1 00:30:13.044 --rc geninfo_all_blocks=1 00:30:13.044 --rc geninfo_unexecuted_blocks=1 00:30:13.044 00:30:13.044 ' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.044 --rc genhtml_branch_coverage=1 00:30:13.044 --rc genhtml_function_coverage=1 00:30:13.044 --rc genhtml_legend=1 00:30:13.044 --rc geninfo_all_blocks=1 00:30:13.044 --rc geninfo_unexecuted_blocks=1 00:30:13.044 00:30:13.044 ' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:13.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.044 --rc genhtml_branch_coverage=1 00:30:13.044 --rc genhtml_function_coverage=1 00:30:13.044 --rc genhtml_legend=1 00:30:13.044 --rc geninfo_all_blocks=1 00:30:13.044 --rc geninfo_unexecuted_blocks=1 00:30:13.044 00:30:13.044 ' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.044 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:13.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.045 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.371 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:18.372 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:18.372 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
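The discovery pass above matches both ports of an Intel E810 NIC (device id 0x159b, bound to the ice driver) into the e810 bucket, then resolves each PCI function to its kernel net device through sysfs, yielding cvl_0_0 and cvl_0_1. The trace that follows splits the two ports across a network namespace so the target and initiator can talk to each other over real hardware. Condensed from the commands traced below (interface names and addresses as in this log):

    # resolve a PCI function to its netdev name (prints cvl_0_0 here)
    ls /sys/bus/pci/devices/0000:af:00.0/net/

    # target port goes into its own netns; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagged so iptr can strip it again on teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns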
00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:18.372 Found net devices under 0000:af:00.0: cvl_0_0 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:18.372 Found net devices under 0000:af:00.1: cvl_0_1 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.372 12:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.372 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.372 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.372 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.372 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.372 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:30:18.629 00:30:18.629 --- 10.0.0.2 ping statistics --- 00:30:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.629 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:30:18.629 00:30:18.629 --- 10.0.0.1 ping statistics --- 00:30:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.629 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:18.629 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3787901 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3787901 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3787901 ']' 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.630 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:18.630 [2024-12-10 12:33:25.373698] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:30:18.630 [2024-12-10 12:33:25.373786] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.888 [2024-12-10 12:33:25.490588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.888 [2024-12-10 12:33:25.595246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.888 [2024-12-10 12:33:25.595291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.888 [2024-12-10 12:33:25.595302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.888 [2024-12-10 12:33:25.595313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.888 [2024-12-10 12:33:25.595320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.888 [2024-12-10 12:33:25.596735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3787932 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9b2c6421-cf8f-449b-98f5-3eaac42cd88f 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a92d32e4-b42d-4d8f-a2c5-d9f52271ef37 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2dc3bab7-6280-4a46-9f1a-278bb5926534 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.454 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:19.454 null0 00:30:19.454 null1 00:30:19.454 null2 00:30:19.454 [2024-12-10 12:33:26.257703] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.713 [2024-12-10 12:33:26.281919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.713 [2024-12-10 12:33:26.281952] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:30:19.713 [2024-12-10 12:33:26.282033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787932 ] 00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3787932 /var/tmp/tgt2.sock 00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3787932 ']' 00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:19.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
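The three uuidgen values just above become the UUIDs of the namespaces backed by the null0/null1/null2 bdevs on the second target, and the body of the test (traced below) verifies that the NGUID each namespace reports over NVMe/TCP is simply its UUID with the dashes stripped. The check, condensed from the uuid2nguid and nvme_get_nguid helpers visible in the trace (device name and UUID as in this log):

    # NGUID should equal the namespace UUID minus the dashes, case-insensitively
    uuid=9b2c6421-cf8f-449b-98f5-3eaac42cd88f
    expected=$(echo "$uuid" | tr -d -)
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${actual^^} == "${expected^^}" ]] && echo "nguid matches uuid for nvme0n1"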
00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.713 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:19.713 [2024-12-10 12:33:26.395017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.713 [2024-12-10 12:33:26.505294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.647 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.647 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:20.647 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:20.905 [2024-12-10 12:33:27.638652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.905 [2024-12-10 12:33:27.654774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:20.905 nvme0n1 nvme0n2 00:30:20.905 nvme1n1 00:30:20.905 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:20.905 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:20.905 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:22.277 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:23.237 12:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9b2c6421-cf8f-449b-98f5-3eaac42cd88f 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9b2c6421cf8f449b98f53eaac42cd88f 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9B2C6421CF8F449B98F53EAAC42CD88F 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9B2C6421CF8F449B98F53EAAC42CD88F == \9\B\2\C\6\4\2\1\C\F\8\F\4\4\9\B\9\8\F\5\3\E\A\A\C\4\2\C\D\8\8\F ]] 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a92d32e4-b42d-4d8f-a2c5-d9f52271ef37 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:23.237 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a92d32e4b42d4d8fa2c5d9f52271ef37 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A92D32E4B42D4D8FA2C5D9F52271EF37 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A92D32E4B42D4D8FA2C5D9F52271EF37 == \A\9\2\D\3\2\E\4\B\4\2\D\4\D\8\F\A\2\C\5\D\9\F\5\2\2\7\1\E\F\3\7 ]] 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:23.238 12:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2dc3bab7-6280-4a46-9f1a-278bb5926534 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:23.238 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:23.238 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2dc3bab762804a469f1a278bb5926534 00:30:23.238 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2DC3BAB762804A469F1A278BB5926534 00:30:23.238 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2DC3BAB762804A469F1A278BB5926534 == \2\D\C\3\B\A\B\7\6\2\8\0\4\A\4\6\9\F\1\A\2\7\8\B\B\5\9\2\6\5\3\4 ]] 00:30:23.238 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3787932 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3787932 ']' 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3787932 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3787932 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3787932' 00:30:23.804 killing process with pid 3787932 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3787932 00:30:23.804 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3787932 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.333 rmmod nvme_tcp 00:30:26.333 rmmod nvme_fabrics 00:30:26.333 rmmod nvme_keyring 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3787901 ']' 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3787901 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3787901 ']' 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3787901 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3787901 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3787901' 00:30:26.333 killing process with pid 3787901 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3787901 00:30:26.333 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3787901 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.267 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.800 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.800 00:30:29.800 real 0m16.539s 00:30:29.800 user 
0m17.061s 00:30:29.800 sys 0m5.393s 00:30:29.800 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.800 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:29.800 ************************************ 00:30:29.800 END TEST nvmf_nsid 00:30:29.800 ************************************ 00:30:29.800 12:33:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:29.800 00:30:29.800 real 18m43.465s 00:30:29.800 user 50m2.455s 00:30:29.800 sys 3m58.679s 00:30:29.800 12:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.800 12:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:29.800 ************************************ 00:30:29.800 END TEST nvmf_target_extra 00:30:29.800 ************************************ 00:30:29.800 12:33:36 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:29.800 12:33:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:29.800 12:33:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.800 12:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.800 ************************************ 00:30:29.800 START TEST nvmf_host 00:30:29.800 ************************************ 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:29.800 * Looking for test storage... 00:30:29.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.800 --rc genhtml_branch_coverage=1 00:30:29.800 --rc genhtml_function_coverage=1 00:30:29.800 --rc genhtml_legend=1 00:30:29.800 --rc geninfo_all_blocks=1 00:30:29.800 --rc geninfo_unexecuted_blocks=1 00:30:29.800 00:30:29.800 ' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.800 --rc genhtml_branch_coverage=1 00:30:29.800 --rc genhtml_function_coverage=1 00:30:29.800 --rc genhtml_legend=1 00:30:29.800 --rc geninfo_all_blocks=1 00:30:29.800 --rc geninfo_unexecuted_blocks=1 00:30:29.800 00:30:29.800 ' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.800 --rc genhtml_branch_coverage=1 00:30:29.800 --rc genhtml_function_coverage=1 00:30:29.800 --rc genhtml_legend=1 00:30:29.800 --rc geninfo_all_blocks=1 00:30:29.800 --rc geninfo_unexecuted_blocks=1 00:30:29.800 00:30:29.800 ' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.800 --rc genhtml_branch_coverage=1 00:30:29.800 --rc genhtml_function_coverage=1 00:30:29.800 --rc genhtml_legend=1 00:30:29.800 --rc geninfo_all_blocks=1 00:30:29.800 --rc geninfo_unexecuted_blocks=1 00:30:29.800 00:30:29.800 ' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
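A note on the lt/cmp_versions trace above: scripts/common.sh is comparing the installed lcov version against 2 by splitting each version string on ".", "-" and ":" into an array and comparing it component by component; the result decides whether the older "--rc lcov_branch_coverage=1" style options get exported. A minimal standalone sketch of the same comparison, assuming plain bash (treating missing components as 0 is an assumption of this sketch, sufficient for "1.15 < 2"):

  # sketch of the component-wise version compare seen in the trace (bash only)
  lt() {
      local IFS='.-:' v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger
      done
      return 1   # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov flags needed"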
00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.800 12:33:36 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:29.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.801 ************************************ 00:30:29.801 START TEST nvmf_multicontroller 00:30:29.801 ************************************ 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:29.801 * Looking for test storage... 
00:30:29.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.801 --rc genhtml_branch_coverage=1 00:30:29.801 --rc genhtml_function_coverage=1 00:30:29.801 --rc genhtml_legend=1 00:30:29.801 --rc geninfo_all_blocks=1 00:30:29.801 --rc geninfo_unexecuted_blocks=1 00:30:29.801 00:30:29.801 ' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.801 --rc genhtml_branch_coverage=1 00:30:29.801 --rc genhtml_function_coverage=1 00:30:29.801 --rc genhtml_legend=1 00:30:29.801 --rc geninfo_all_blocks=1 00:30:29.801 --rc geninfo_unexecuted_blocks=1 00:30:29.801 00:30:29.801 ' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.801 --rc genhtml_branch_coverage=1 00:30:29.801 --rc genhtml_function_coverage=1 00:30:29.801 --rc genhtml_legend=1 00:30:29.801 --rc geninfo_all_blocks=1 00:30:29.801 --rc geninfo_unexecuted_blocks=1 00:30:29.801 00:30:29.801 ' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.801 --rc genhtml_branch_coverage=1 00:30:29.801 --rc genhtml_function_coverage=1 00:30:29.801 --rc genhtml_legend=1 00:30:29.801 --rc geninfo_all_blocks=1 00:30:29.801 --rc geninfo_unexecuted_blocks=1 00:30:29.801 00:30:29.801 ' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:29.801 12:33:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.801 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:29.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:29.802 12:33:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.802 12:33:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.070 
12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.070 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:35.070 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:35.071 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.071 12:33:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:35.071 Found net devices under 0000:af:00.0: cvl_0_0 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:35.071 Found net devices under 0000:af:00.1: cvl_0_1 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
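A note on the NIC discovery just traced: nvmf/common.sh collects the supported Intel E810/X722 and Mellanox device IDs, keeps the two E810 functions found at 0000:af:00.0 and 0000:af:00.1, and maps each PCI address to its kernel interface through sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. A reduced sketch of that PCI-to-netdev mapping (the device-ID filtering is omitted and the hard-coded pci_devs list is an assumption; the harness fills it from a prebuilt pci_bus_cache):

  # sketch: resolve the net interface(s) behind each PCI function via sysfs
  pci_devs=(0000:af:00.0 0000:af:00.1)                  # as discovered above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per netdev
      pci_net_devs=("${pci_net_devs[@]##*/}")           # strip to bare names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

nvmf_tcp_init below then splits these into a target side (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and an initiator side (cvl_0_1 as 10.0.0.1).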
00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.071 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.329 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.329 12:33:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.329 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.329 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.329 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.329 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.329 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.329 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:30:35.329 00:30:35.329 --- 10.0.0.2 ping statistics --- 00:30:35.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.329 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:30:35.329 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:30:35.587 00:30:35.587 --- 10.0.0.1 ping statistics --- 00:30:35.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.587 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:35.587 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3792827 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3792827 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3792827 ']' 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.588 12:33:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:35.588 [2024-12-10 12:33:42.286199] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:30:35.588 [2024-12-10 12:33:42.286302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.588 [2024-12-10 12:33:42.405604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.845 [2024-12-10 12:33:42.512520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.845 [2024-12-10 12:33:42.512564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.845 [2024-12-10 12:33:42.512574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.845 [2024-12-10 12:33:42.512584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.846 [2024-12-10 12:33:42.512593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.846 [2024-12-10 12:33:42.514936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.846 [2024-12-10 12:33:42.515004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.846 [2024-12-10 12:33:42.515013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.411 [2024-12-10 12:33:43.134340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.411 Malloc0 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.411 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.668 [2024-12-10 12:33:43.238561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.668 [2024-12-10 12:33:43.246463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.668 Malloc1 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.668 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3792966 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3792966 /var/tmp/bdevperf.sock 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3792966 ']' 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:36.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
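A note on the step above: bdevperf is launched with -z, so it starts idle and waits to be configured over JSON-RPC, and -r /var/tmp/bdevperf.sock gives it its own RPC socket separate from the target's default /var/tmp/spdk.sock. The rpc_cmd invocations that follow are thin wrappers around scripts/rpc.py; a plain-CLI sketch of the first attach plus one of the deliberately repeated attaches (the relative script path is an assumption, run from the spdk checkout):

  # first path: creates controller NVMe0 (bdev NVMe0n1) inside bdevperf
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # reusing the name NVMe0 for a different subsystem must be rejected with
  # code -114, which is exactly the error response traced below
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo "rejected as expected"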
00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.669 12:33:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.601 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.601 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:37.601 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:37.601 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.601 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.859 NVMe0n1 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.859 1 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.859 request: 00:30:37.859 { 00:30:37.859 "name": "NVMe0", 00:30:37.859 "trtype": "tcp", 00:30:37.859 "traddr": "10.0.0.2", 00:30:37.859 "adrfam": "ipv4", 00:30:37.859 "trsvcid": "4420", 00:30:37.859 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:37.859 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:37.859 "hostaddr": "10.0.0.1", 00:30:37.859 "prchk_reftag": false, 00:30:37.859 "prchk_guard": false, 00:30:37.859 "hdgst": false, 00:30:37.859 "ddgst": false, 00:30:37.859 "allow_unrecognized_csi": false, 00:30:37.859 "method": "bdev_nvme_attach_controller", 00:30:37.859 "req_id": 1 00:30:37.859 } 00:30:37.859 Got JSON-RPC error response 00:30:37.859 response: 00:30:37.859 { 00:30:37.859 "code": -114, 00:30:37.859 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:37.859 } 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.859 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.860 request: 00:30:37.860 { 00:30:37.860 "name": "NVMe0", 00:30:37.860 "trtype": "tcp", 00:30:37.860 "traddr": "10.0.0.2", 00:30:37.860 "adrfam": "ipv4", 00:30:37.860 "trsvcid": "4420", 00:30:37.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:37.860 "hostaddr": "10.0.0.1", 00:30:37.860 "prchk_reftag": false, 00:30:37.860 "prchk_guard": false, 00:30:37.860 "hdgst": false, 00:30:37.860 "ddgst": false, 00:30:37.860 "allow_unrecognized_csi": false, 00:30:37.860 "method": "bdev_nvme_attach_controller", 00:30:37.860 "req_id": 1 00:30:37.860 } 00:30:37.860 Got JSON-RPC error response 00:30:37.860 response: 00:30:37.860 { 00:30:37.860 "code": -114, 00:30:37.860 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:37.860 } 00:30:37.860 12:33:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.860 request: 00:30:37.860 { 00:30:37.860 "name": "NVMe0", 00:30:37.860 "trtype": "tcp", 00:30:37.860 "traddr": "10.0.0.2", 00:30:37.860 "adrfam": "ipv4", 00:30:37.860 "trsvcid": "4420", 00:30:37.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.860 "hostaddr": "10.0.0.1", 00:30:37.860 "prchk_reftag": false, 00:30:37.860 "prchk_guard": false, 00:30:37.860 "hdgst": false, 00:30:37.860 "ddgst": false, 00:30:37.860 "multipath": "disable", 00:30:37.860 "allow_unrecognized_csi": false, 00:30:37.860 "method": "bdev_nvme_attach_controller", 00:30:37.860 "req_id": 1 00:30:37.860 } 00:30:37.860 Got JSON-RPC error response 00:30:37.860 response: 00:30:37.860 { 00:30:37.860 "code": -114, 00:30:37.860 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:37.860 } 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:37.860 12:33:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.860 request: 00:30:37.860 { 00:30:37.860 "name": "NVMe0", 00:30:37.860 "trtype": "tcp", 00:30:37.860 "traddr": "10.0.0.2", 00:30:37.860 "adrfam": "ipv4", 00:30:37.860 "trsvcid": "4420", 00:30:37.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.860 "hostaddr": "10.0.0.1", 00:30:37.860 "prchk_reftag": false, 00:30:37.860 "prchk_guard": false, 00:30:37.860 "hdgst": false, 00:30:37.860 "ddgst": false, 00:30:37.860 "multipath": "failover", 00:30:37.860 "allow_unrecognized_csi": false, 00:30:37.860 "method": "bdev_nvme_attach_controller", 00:30:37.860 "req_id": 1 00:30:37.860 } 00:30:37.860 Got JSON-RPC error response 00:30:37.860 response: 00:30:37.860 { 00:30:37.860 "code": -114, 00:30:37.860 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:37.860 } 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.860 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.118 NVMe0n1 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
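The four NOT-wrapped attaches above (multicontroller.sh@60, @65, @69, @74) are deliberate failure cases: reusing controller name NVMe0 with a different hostnqn, with a different subsystem NQN, or with -x disable / -x failover against an already-attached controller each returns JSON-RPC error -114, and the NOT helper turns that expected failure into a pass. Only the final plain attach through the second listener (port 4421, @79) succeeds, since it adds another path to the same controller identity. A hedged sketch of one such negative check outside the harness, reusing the $SPDK and $SOCK names assumed earlier:

    # Expected-failure probe: same bdev name NVMe0, different subsystem NQN.
    if "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1; then
        echo "FAIL: duplicate controller name was accepted" >&2
        exit 1
    fi
    # The one variant that must succeed: the same identity over listener 4421.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1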
00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.118 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:38.118 12:33:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:39.490 { 00:30:39.490 "results": [ 00:30:39.490 { 00:30:39.490 "job": "NVMe0n1", 00:30:39.490 "core_mask": "0x1", 00:30:39.490 "workload": "write", 00:30:39.490 "status": "finished", 00:30:39.490 "queue_depth": 128, 00:30:39.490 "io_size": 4096, 00:30:39.490 "runtime": 1.003507, 00:30:39.490 "iops": 21646.08717228679, 00:30:39.490 "mibps": 84.55502801674527, 00:30:39.490 "io_failed": 0, 00:30:39.490 "io_timeout": 0, 00:30:39.490 "avg_latency_us": 5905.8691436814115, 00:30:39.490 "min_latency_us": 3635.687619047619, 00:30:39.490 "max_latency_us": 13044.784761904762 00:30:39.490 } 00:30:39.490 ], 00:30:39.490 "core_count": 1 00:30:39.490 } 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3792966 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3792966 ']' 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3792966 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792966 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792966' 00:30:39.490 killing process with pid 3792966 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3792966 00:30:39.490 12:33:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3792966 00:30:40.423 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:40.423 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.423 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:40.423 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.423 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:40.424 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:40.424 [2024-12-10 12:33:43.430667] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:30:40.424 [2024-12-10 12:33:43.430760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792966 ] 00:30:40.424 [2024-12-10 12:33:43.546931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.424 [2024-12-10 12:33:43.651770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.424 [2024-12-10 12:33:44.881222] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 2bf4141e-8bd3-4793-beb2-c9701ce51759 already exists 00:30:40.424 [2024-12-10 12:33:44.881269] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:2bf4141e-8bd3-4793-beb2-c9701ce51759 alias for bdev NVMe1n1 00:30:40.424 [2024-12-10 12:33:44.881283] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:40.424 Running I/O for 1 seconds... 00:30:40.424 21594.00 IOPS, 84.35 MiB/s 00:30:40.424 Latency(us) 00:30:40.424 [2024-12-10T11:33:47.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.424 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:40.424 NVMe0n1 : 1.00 21646.09 84.56 0.00 0.00 5905.87 3635.69 13044.78 00:30:40.424 [2024-12-10T11:33:47.250Z] =================================================================================================================== 00:30:40.424 [2024-12-10T11:33:47.250Z] Total : 21646.09 84.56 0.00 0.00 5905.87 3635.69 13044.78 00:30:40.424 Received shutdown signal, test time was about 1.000000 seconds 00:30:40.424 00:30:40.424 Latency(us) 00:30:40.424 [2024-12-10T11:33:47.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.424 [2024-12-10T11:33:47.250Z] =================================================================================================================== 00:30:40.424 [2024-12-10T11:33:47.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:40.424 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.424 rmmod nvme_tcp 00:30:40.424 rmmod nvme_fabrics 00:30:40.424 rmmod nvme_keyring 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:40.424 
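The try.txt dump above also explains the ERROR lines: attaching NVMe1 reaches a namespace whose UUID (2bf4141e-...) is evidently already exposed as NVMe0n1, so spdk_bdev_register() for NVMe1n1 fails on the duplicate name, while the controller attach itself still returned success (get_controllers counted 2). The summary row is internally consistent, since throughput is IOPS times I/O size; a one-line check:

    # 21646.09 IOPS x 4096 B = ~88.66 MB/s; in binary units, the 84.56 MiB/s shown above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 21646.09 * 4096 / (1024 * 1024) }'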
12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3792827 ']' 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3792827 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3792827 ']' 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3792827 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792827 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792827' 00:30:40.424 killing process with pid 3792827 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3792827 00:30:40.424 12:33:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3792827 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.323 12:33:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.225 12:33:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.225 00:30:44.225 real 0m14.427s 00:30:44.225 user 0m23.427s 00:30:44.225 sys 0m5.212s 00:30:44.225 12:33:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.225 12:33:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.225 ************************************ 00:30:44.225 END TEST nvmf_multicontroller 00:30:44.225 ************************************ 00:30:44.225 12:33:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:44.225 12:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:44.225 12:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.226 ************************************ 00:30:44.226 START TEST nvmf_aer 00:30:44.226 ************************************ 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:44.226 * Looking for test storage... 00:30:44.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.226 --rc genhtml_branch_coverage=1 00:30:44.226 --rc genhtml_function_coverage=1 00:30:44.226 --rc genhtml_legend=1 00:30:44.226 --rc geninfo_all_blocks=1 00:30:44.226 --rc geninfo_unexecuted_blocks=1 00:30:44.226 00:30:44.226 ' 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.226 --rc genhtml_branch_coverage=1 00:30:44.226 --rc genhtml_function_coverage=1 00:30:44.226 --rc genhtml_legend=1 00:30:44.226 --rc geninfo_all_blocks=1 00:30:44.226 --rc geninfo_unexecuted_blocks=1 00:30:44.226 00:30:44.226 ' 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.226 --rc genhtml_branch_coverage=1 00:30:44.226 --rc genhtml_function_coverage=1 00:30:44.226 --rc genhtml_legend=1 00:30:44.226 --rc geninfo_all_blocks=1 00:30:44.226 --rc geninfo_unexecuted_blocks=1 00:30:44.226 00:30:44.226 ' 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.226 --rc genhtml_branch_coverage=1 00:30:44.226 --rc genhtml_function_coverage=1 00:30:44.226 --rc genhtml_legend=1 00:30:44.226 --rc geninfo_all_blocks=1 00:30:44.226 --rc geninfo_unexecuted_blocks=1 00:30:44.226 00:30:44.226 ' 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:44.226 12:33:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.226 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:44.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.227 12:33:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:49.494 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:49.494 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:49.494 Found net devices under 0000:af:00.0: cvl_0_0 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.494 12:33:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:49.494 Found net devices under 0000:af:00.1: cvl_0_1 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.494 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.495 12:33:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.495 
12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:30:49.495 00:30:49.495 --- 10.0.0.2 ping statistics --- 00:30:49.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.495 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:30:49.495 00:30:49.495 --- 10.0.0.1 ping statistics --- 00:30:49.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.495 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3797220 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3797220 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3797220 ']' 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.495 12:33:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.753 [2024-12-10 12:33:56.368704] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
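Before this nvmf_tgt startup, nvmf_tcp_init in the trace above split the two e810 ports into a point-to-point TCP rig: cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1), and every target process is launched under ip netns exec. Condensed from the captured commands (same port names assumed; run as root):

    # Target port lives in its own namespace; initiator port stays in the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings above then verify both directions of that link before the target app comes up inside the namespace.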
00:30:49.753 [2024-12-10 12:33:56.368796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.753 [2024-12-10 12:33:56.486666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.012 [2024-12-10 12:33:56.597925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.012 [2024-12-10 12:33:56.597967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.012 [2024-12-10 12:33:56.597977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.012 [2024-12-10 12:33:56.597988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.012 [2024-12-10 12:33:56.597996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.012 [2024-12-10 12:33:56.600527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.012 [2024-12-10 12:33:56.600602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.012 [2024-12-10 12:33:56.600618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.012 [2024-12-10 12:33:56.600629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.578 [2024-12-10 12:33:57.219956] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.578 Malloc0 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.578 [2024-12-10 12:33:57.353680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.578 [ 00:30:50.578 { 00:30:50.578 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:50.578 "subtype": "Discovery", 00:30:50.578 "listen_addresses": [], 00:30:50.578 "allow_any_host": true, 00:30:50.578 "hosts": [] 00:30:50.578 }, 00:30:50.578 { 00:30:50.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.578 "subtype": "NVMe", 00:30:50.578 "listen_addresses": [ 00:30:50.578 { 00:30:50.578 "trtype": "TCP", 00:30:50.578 "adrfam": "IPv4", 00:30:50.578 "traddr": "10.0.0.2", 00:30:50.578 "trsvcid": "4420" 00:30:50.578 } 00:30:50.578 ], 00:30:50.578 "allow_any_host": true, 00:30:50.578 "hosts": [], 00:30:50.578 "serial_number": "SPDK00000000000001", 00:30:50.578 "model_number": "SPDK bdev Controller", 00:30:50.578 "max_namespaces": 2, 00:30:50.578 "min_cntlid": 1, 00:30:50.578 "max_cntlid": 65519, 00:30:50.578 "namespaces": [ 00:30:50.578 { 00:30:50.578 "nsid": 1, 00:30:50.578 "bdev_name": "Malloc0", 00:30:50.578 "name": "Malloc0", 00:30:50.578 "nguid": "465C32D8A3E544FFB328E8BC3DAA64AC", 00:30:50.578 "uuid": "465c32d8-a3e5-44ff-b328-e8bc3daa64ac" 00:30:50.578 } 00:30:50.578 ] 00:30:50.578 } 00:30:50.578 ] 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3797306 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:50.578 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:50.836 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.095 Malloc1 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.095 [ 00:30:51.095 { 00:30:51.095 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:51.095 "subtype": "Discovery", 00:30:51.095 "listen_addresses": [], 00:30:51.095 "allow_any_host": true, 00:30:51.095 "hosts": [] 00:30:51.095 }, 00:30:51.095 { 00:30:51.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.095 "subtype": "NVMe", 00:30:51.095 "listen_addresses": [ 00:30:51.095 { 00:30:51.095 "trtype": "TCP", 00:30:51.095 "adrfam": "IPv4", 00:30:51.095 "traddr": "10.0.0.2", 00:30:51.095 "trsvcid": "4420" 00:30:51.095 } 00:30:51.095 ], 00:30:51.095 "allow_any_host": true, 00:30:51.095 "hosts": [], 00:30:51.095 "serial_number": "SPDK00000000000001", 00:30:51.095 "model_number": "SPDK bdev Controller", 00:30:51.095 "max_namespaces": 2, 00:30:51.095 "min_cntlid": 1, 00:30:51.095 "max_cntlid": 65519, 00:30:51.095 "namespaces": [ 00:30:51.095 
{ 00:30:51.095 "nsid": 1, 00:30:51.095 "bdev_name": "Malloc0", 00:30:51.095 "name": "Malloc0", 00:30:51.095 "nguid": "465C32D8A3E544FFB328E8BC3DAA64AC", 00:30:51.095 "uuid": "465c32d8-a3e5-44ff-b328-e8bc3daa64ac" 00:30:51.095 }, 00:30:51.095 { 00:30:51.095 "nsid": 2, 00:30:51.095 "bdev_name": "Malloc1", 00:30:51.095 "name": "Malloc1", 00:30:51.095 "nguid": "FFFF12185D72494989F64C2B76EC63B1", 00:30:51.095 "uuid": "ffff1218-5d72-4949-89f6-4c2b76ec63b1" 00:30:51.095 } 00:30:51.095 ] 00:30:51.095 } 00:30:51.095 ] 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3797306 00:30:51.095 Asynchronous Event Request test 00:30:51.095 Attaching to 10.0.0.2 00:30:51.095 Attached to 10.0.0.2 00:30:51.095 Registering asynchronous event callbacks... 00:30:51.095 Starting namespace attribute notice tests for all controllers... 00:30:51.095 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:51.095 aer_cb - Changed Namespace 00:30:51.095 Cleaning up... 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.095 12:33:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.354 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.354 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:51.354 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.354 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.612 rmmod nvme_tcp 00:30:51.612 rmmod nvme_fabrics 00:30:51.612 rmmod nvme_keyring 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3797220 ']' 
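The nvmf_aer exchange above reduces to a short RPC sequence. A minimal sketch of the same setup, assuming rpc_cmd is the usual wrapper around scripts/rpc.py in an SPDK checkout (the rpc variable below is illustrative; the NQN, sizes and the 10.0.0.2 listener are exactly as traced):

rpc=scripts/rpc.py                                    # assumed helper location
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO unit size
$rpc bdev_malloc_create 64 512 --name Malloc0         # 64 MiB ramdisk with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# With test/nvme/aer/aer connected and waiting on the touch file, hot-adding a
# second namespace is what raised the "Changed Namespace" AEN logged above:
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2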
00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3797220 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3797220 ']' 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3797220 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3797220 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3797220' 00:30:51.612 killing process with pid 3797220 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3797220 00:30:51.612 12:33:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3797220 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.988 12:33:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.022 12:34:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:55.022 00:30:55.022 real 0m10.817s 00:30:55.022 user 0m12.390s 00:30:55.022 sys 0m4.556s 00:30:55.022 12:34:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:55.023 ************************************ 00:30:55.023 END TEST nvmf_aer 00:30:55.023 ************************************ 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.023 ************************************ 00:30:55.023 START TEST nvmf_async_init 00:30:55.023 
************************************ 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:55.023 * Looking for test storage... 00:30:55.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:55.023 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:55.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.282 --rc genhtml_branch_coverage=1 00:30:55.282 --rc genhtml_function_coverage=1 00:30:55.282 --rc genhtml_legend=1 00:30:55.282 --rc geninfo_all_blocks=1 00:30:55.282 --rc geninfo_unexecuted_blocks=1 00:30:55.282 00:30:55.282 ' 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:55.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.282 --rc genhtml_branch_coverage=1 00:30:55.282 --rc genhtml_function_coverage=1 00:30:55.282 --rc genhtml_legend=1 00:30:55.282 --rc geninfo_all_blocks=1 00:30:55.282 --rc geninfo_unexecuted_blocks=1 00:30:55.282 00:30:55.282 ' 00:30:55.282 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:55.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.282 --rc genhtml_branch_coverage=1 00:30:55.282 --rc genhtml_function_coverage=1 00:30:55.282 --rc genhtml_legend=1 00:30:55.282 --rc geninfo_all_blocks=1 00:30:55.283 --rc geninfo_unexecuted_blocks=1 00:30:55.283 00:30:55.283 ' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:55.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.283 --rc genhtml_branch_coverage=1 00:30:55.283 --rc genhtml_function_coverage=1 00:30:55.283 --rc genhtml_legend=1 00:30:55.283 --rc geninfo_all_blocks=1 00:30:55.283 --rc geninfo_unexecuted_blocks=1 00:30:55.283 00:30:55.283 ' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.283 12:34:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:55.283 12:34:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=36a41355b5f9455cb4ee11c6c17dbf49 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:55.283 12:34:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:00.554 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:00.554 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:00.554 Found net devices under 0000:af:00.0: cvl_0_0 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:00.554 Found net devices under 0000:af:00.1: cvl_0_1 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.554 12:34:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:00.554 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:00.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:31:00.554 00:31:00.555 --- 10.0.0.2 ping statistics --- 00:31:00.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.555 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:31:00.555 00:31:00.555 --- 10.0.0.1 ping statistics --- 00:31:00.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.555 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3801046 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3801046 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3801046 ']' 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.555 12:34:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.813 [2024-12-10 12:34:07.412560] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:31:00.813 [2024-12-10 12:34:07.412662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.813 [2024-12-10 12:34:07.532158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.072 [2024-12-10 12:34:07.642623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.072 [2024-12-10 12:34:07.642666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.072 [2024-12-10 12:34:07.642677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.072 [2024-12-10 12:34:07.642700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.072 [2024-12-10 12:34:07.642708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.072 [2024-12-10 12:34:07.644184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 [2024-12-10 12:34:08.259323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 null0 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 36a41355b5f9455cb4ee11c6c17dbf49 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 [2024-12-10 12:34:08.299593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.640 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.899 nvme0n1 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.899 [ 00:31:01.899 { 00:31:01.899 "name": "nvme0n1", 00:31:01.899 "aliases": [ 00:31:01.899 "36a41355-b5f9-455c-b4ee-11c6c17dbf49" 00:31:01.899 ], 00:31:01.899 "product_name": "NVMe disk", 00:31:01.899 "block_size": 512, 00:31:01.899 "num_blocks": 2097152, 00:31:01.899 "uuid": "36a41355-b5f9-455c-b4ee-11c6c17dbf49", 00:31:01.899 "numa_id": 1, 00:31:01.899 "assigned_rate_limits": { 00:31:01.899 "rw_ios_per_sec": 0, 00:31:01.899 "rw_mbytes_per_sec": 0, 00:31:01.899 "r_mbytes_per_sec": 0, 00:31:01.899 "w_mbytes_per_sec": 0 00:31:01.899 }, 00:31:01.899 "claimed": false, 00:31:01.899 "zoned": false, 00:31:01.899 "supported_io_types": { 00:31:01.899 "read": true, 00:31:01.899 "write": true, 00:31:01.899 "unmap": false, 00:31:01.899 "flush": true, 00:31:01.899 "reset": true, 00:31:01.899 "nvme_admin": true, 00:31:01.899 "nvme_io": true, 00:31:01.899 "nvme_io_md": false, 00:31:01.899 "write_zeroes": true, 00:31:01.899 "zcopy": false, 00:31:01.899 "get_zone_info": false, 00:31:01.899 "zone_management": false, 00:31:01.899 "zone_append": false, 00:31:01.899 "compare": true, 00:31:01.899 "compare_and_write": true, 00:31:01.899 "abort": true, 00:31:01.899 "seek_hole": false, 00:31:01.899 "seek_data": false, 00:31:01.899 "copy": true, 00:31:01.899 "nvme_iov_md": false 00:31:01.899 }, 00:31:01.899 
"memory_domains": [ 00:31:01.899 { 00:31:01.899 "dma_device_id": "system", 00:31:01.899 "dma_device_type": 1 00:31:01.899 } 00:31:01.899 ], 00:31:01.899 "driver_specific": { 00:31:01.899 "nvme": [ 00:31:01.899 { 00:31:01.899 "trid": { 00:31:01.899 "trtype": "TCP", 00:31:01.899 "adrfam": "IPv4", 00:31:01.899 "traddr": "10.0.0.2", 00:31:01.899 "trsvcid": "4420", 00:31:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:01.899 }, 00:31:01.899 "ctrlr_data": { 00:31:01.899 "cntlid": 1, 00:31:01.899 "vendor_id": "0x8086", 00:31:01.899 "model_number": "SPDK bdev Controller", 00:31:01.899 "serial_number": "00000000000000000000", 00:31:01.899 "firmware_revision": "25.01", 00:31:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.899 "oacs": { 00:31:01.899 "security": 0, 00:31:01.899 "format": 0, 00:31:01.899 "firmware": 0, 00:31:01.899 "ns_manage": 0 00:31:01.899 }, 00:31:01.899 "multi_ctrlr": true, 00:31:01.899 "ana_reporting": false 00:31:01.899 }, 00:31:01.899 "vs": { 00:31:01.899 "nvme_version": "1.3" 00:31:01.899 }, 00:31:01.899 "ns_data": { 00:31:01.899 "id": 1, 00:31:01.899 "can_share": true 00:31:01.899 } 00:31:01.899 } 00:31:01.899 ], 00:31:01.899 "mp_policy": "active_passive" 00:31:01.899 } 00:31:01.899 } 00:31:01.899 ] 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.899 [2024-12-10 12:34:08.549669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:01.899 [2024-12-10 12:34:08.549749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:31:01.899 [2024-12-10 12:34:08.682284] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.899 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.899 [ 00:31:01.899 { 00:31:01.899 "name": "nvme0n1", 00:31:01.899 "aliases": [ 00:31:01.899 "36a41355-b5f9-455c-b4ee-11c6c17dbf49" 00:31:01.899 ], 00:31:01.899 "product_name": "NVMe disk", 00:31:01.899 "block_size": 512, 00:31:01.899 "num_blocks": 2097152, 00:31:01.899 "uuid": "36a41355-b5f9-455c-b4ee-11c6c17dbf49", 00:31:01.899 "numa_id": 1, 00:31:01.899 "assigned_rate_limits": { 00:31:01.899 "rw_ios_per_sec": 0, 00:31:01.899 "rw_mbytes_per_sec": 0, 00:31:01.899 "r_mbytes_per_sec": 0, 00:31:01.899 "w_mbytes_per_sec": 0 00:31:01.899 }, 00:31:01.899 "claimed": false, 00:31:01.899 "zoned": false, 00:31:01.899 "supported_io_types": { 00:31:01.899 "read": true, 00:31:01.899 "write": true, 00:31:01.899 "unmap": false, 00:31:01.900 "flush": true, 00:31:01.900 "reset": true, 00:31:01.900 "nvme_admin": true, 00:31:01.900 "nvme_io": true, 00:31:01.900 "nvme_io_md": false, 00:31:01.900 "write_zeroes": true, 00:31:01.900 "zcopy": false, 00:31:01.900 "get_zone_info": false, 00:31:01.900 "zone_management": false, 00:31:01.900 "zone_append": false, 00:31:01.900 "compare": true, 00:31:01.900 "compare_and_write": true, 00:31:01.900 "abort": true, 00:31:01.900 "seek_hole": false, 00:31:01.900 "seek_data": false, 00:31:01.900 "copy": true, 00:31:01.900 "nvme_iov_md": false 00:31:01.900 }, 00:31:01.900 "memory_domains": [ 00:31:01.900 { 00:31:01.900 "dma_device_id": "system", 00:31:01.900 "dma_device_type": 1 00:31:01.900 } 00:31:01.900 ], 00:31:01.900 "driver_specific": { 00:31:01.900 "nvme": [ 00:31:01.900 { 00:31:01.900 "trid": { 00:31:01.900 "trtype": "TCP", 00:31:01.900 "adrfam": "IPv4", 00:31:01.900 "traddr": "10.0.0.2", 00:31:01.900 "trsvcid": "4420", 00:31:01.900 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:01.900 }, 00:31:01.900 "ctrlr_data": { 00:31:01.900 "cntlid": 2, 00:31:01.900 "vendor_id": "0x8086", 00:31:01.900 "model_number": "SPDK bdev Controller", 00:31:01.900 "serial_number": "00000000000000000000", 00:31:01.900 "firmware_revision": "25.01", 00:31:01.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.900 "oacs": { 00:31:01.900 "security": 0, 00:31:01.900 "format": 0, 00:31:01.900 "firmware": 0, 00:31:01.900 "ns_manage": 0 00:31:01.900 }, 00:31:01.900 "multi_ctrlr": true, 00:31:01.900 "ana_reporting": false 00:31:01.900 }, 00:31:01.900 "vs": { 00:31:01.900 "nvme_version": "1.3" 00:31:01.900 }, 00:31:01.900 "ns_data": { 00:31:01.900 "id": 1, 00:31:01.900 "can_share": true 00:31:01.900 } 00:31:01.900 } 00:31:01.900 ], 00:31:01.900 "mp_policy": "active_passive" 00:31:01.900 } 00:31:01.900 } 00:31:01.900 ] 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
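When scripting against this output, the controller ID is easiest to pull straight out of the bdev_get_bdevs JSON. A one-liner assuming jq is present on the host (it is not an SPDK dependency):

$rpc bdev_get_bdevs -b nvme0n1 | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
# prints 2 here, and 3 after the TLS re-attach further down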
00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zMEyoV20uj 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zMEyoV20uj 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.zMEyoV20uj 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.900 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.159 [2024-12-10 12:34:08.742332] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:02.159 [2024-12-10 12:34:08.742478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.159 [2024-12-10 12:34:08.758381] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:02.159 nvme0n1 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.159 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.159 [ 00:31:02.159 { 00:31:02.159 "name": "nvme0n1", 00:31:02.159 "aliases": [ 00:31:02.159 "36a41355-b5f9-455c-b4ee-11c6c17dbf49" 00:31:02.159 ], 00:31:02.159 "product_name": "NVMe disk", 00:31:02.159 "block_size": 512, 00:31:02.159 "num_blocks": 2097152, 00:31:02.159 "uuid": "36a41355-b5f9-455c-b4ee-11c6c17dbf49", 00:31:02.159 "numa_id": 1, 00:31:02.159 "assigned_rate_limits": { 00:31:02.159 "rw_ios_per_sec": 0, 00:31:02.159 "rw_mbytes_per_sec": 0, 00:31:02.159 "r_mbytes_per_sec": 0, 00:31:02.159 "w_mbytes_per_sec": 0 00:31:02.159 }, 00:31:02.159 "claimed": false, 00:31:02.159 "zoned": false, 00:31:02.159 "supported_io_types": { 00:31:02.159 "read": true, 00:31:02.159 "write": true, 00:31:02.159 "unmap": false, 00:31:02.159 "flush": true, 00:31:02.159 "reset": true, 00:31:02.159 "nvme_admin": true, 00:31:02.159 "nvme_io": true, 00:31:02.159 "nvme_io_md": false, 00:31:02.159 "write_zeroes": true, 00:31:02.159 "zcopy": false, 00:31:02.159 "get_zone_info": false, 00:31:02.159 "zone_management": false, 00:31:02.159 "zone_append": false, 00:31:02.159 "compare": true, 00:31:02.159 "compare_and_write": true, 00:31:02.159 "abort": true, 00:31:02.160 "seek_hole": false, 00:31:02.160 "seek_data": false, 00:31:02.160 "copy": true, 00:31:02.160 "nvme_iov_md": false 00:31:02.160 }, 00:31:02.160 "memory_domains": [ 00:31:02.160 { 00:31:02.160 "dma_device_id": "system", 00:31:02.160 "dma_device_type": 1 00:31:02.160 } 00:31:02.160 ], 00:31:02.160 "driver_specific": { 00:31:02.160 "nvme": [ 00:31:02.160 { 00:31:02.160 "trid": { 00:31:02.160 "trtype": "TCP", 00:31:02.160 "adrfam": "IPv4", 00:31:02.160 "traddr": "10.0.0.2", 00:31:02.160 "trsvcid": "4421", 00:31:02.160 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:02.160 }, 00:31:02.160 "ctrlr_data": { 00:31:02.160 "cntlid": 3, 00:31:02.160 "vendor_id": "0x8086", 00:31:02.160 "model_number": "SPDK bdev Controller", 00:31:02.160 "serial_number": "00000000000000000000", 00:31:02.160 "firmware_revision": "25.01", 00:31:02.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.160 "oacs": { 00:31:02.160 "security": 0, 00:31:02.160 "format": 0, 00:31:02.160 "firmware": 0, 00:31:02.160 "ns_manage": 0 00:31:02.160 }, 00:31:02.160 "multi_ctrlr": true, 00:31:02.160 "ana_reporting": false 00:31:02.160 }, 00:31:02.160 "vs": { 00:31:02.160 "nvme_version": "1.3" 00:31:02.160 }, 00:31:02.160 "ns_data": { 00:31:02.160 "id": 1, 00:31:02.160 "can_share": true 00:31:02.160 } 00:31:02.160 } 00:31:02.160 ], 00:31:02.160 "mp_policy": "active_passive" 00:31:02.160 } 00:31:02.160 } 00:31:02.160 ] 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.zMEyoV20uj 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
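The TLS portion of async_init.sh condenses to the RPCs below, all of which appear in the trace. The PSK literal is the interchange-format test key copied from the log, not a production secret; chmod 0600 mirrors what the script does before handing the file to the keyring:

key=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
chmod 0600 "$key"
$rpc keyring_file_add_key key0 "$key"                                     # register as keyring entry "key0"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable   # hosts must now be listed explicitly
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0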
00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:02.160 rmmod nvme_tcp 00:31:02.160 rmmod nvme_fabrics 00:31:02.160 rmmod nvme_keyring 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3801046 ']' 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3801046 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3801046 ']' 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3801046 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3801046 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3801046' 00:31:02.160 killing process with pid 3801046 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3801046 00:31:02.160 12:34:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3801046 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.536 12:34:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.439 00:31:05.439 real 0m10.465s 00:31:05.439 user 0m4.504s 00:31:05.439 sys 0m4.509s 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.439 ************************************ 00:31:05.439 END TEST nvmf_async_init 00:31:05.439 ************************************ 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.439 ************************************ 00:31:05.439 START TEST dma 00:31:05.439 ************************************ 00:31:05.439 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:05.698 * Looking for test storage... 00:31:05.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:05.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.698 --rc genhtml_branch_coverage=1 00:31:05.698 --rc genhtml_function_coverage=1 00:31:05.698 --rc genhtml_legend=1 00:31:05.698 --rc geninfo_all_blocks=1 00:31:05.698 --rc geninfo_unexecuted_blocks=1 00:31:05.698 00:31:05.698 ' 00:31:05.698 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:05.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.698 --rc genhtml_branch_coverage=1 00:31:05.698 --rc genhtml_function_coverage=1 00:31:05.698 --rc genhtml_legend=1 00:31:05.698 --rc geninfo_all_blocks=1 00:31:05.698 --rc geninfo_unexecuted_blocks=1 00:31:05.698 00:31:05.699 ' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:05.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.699 --rc genhtml_branch_coverage=1 00:31:05.699 --rc genhtml_function_coverage=1 00:31:05.699 --rc genhtml_legend=1 00:31:05.699 --rc geninfo_all_blocks=1 00:31:05.699 --rc geninfo_unexecuted_blocks=1 00:31:05.699 00:31:05.699 ' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:05.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.699 --rc genhtml_branch_coverage=1 00:31:05.699 --rc genhtml_function_coverage=1 00:31:05.699 --rc genhtml_legend=1 00:31:05.699 --rc geninfo_all_blocks=1 00:31:05.699 --rc geninfo_unexecuted_blocks=1 00:31:05.699 00:31:05.699 ' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.699 
12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:05.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:05.699 00:31:05.699 real 0m0.195s 00:31:05.699 user 0m0.113s 00:31:05.699 sys 0m0.096s 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:05.699 ************************************ 00:31:05.699 END TEST dma 00:31:05.699 ************************************ 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.699 ************************************ 00:31:05.699 START TEST nvmf_identify 00:31:05.699 
************************************ 00:31:05.699 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:05.959 * Looking for test storage... 00:31:05.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.959 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:05.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.960 --rc genhtml_branch_coverage=1 00:31:05.960 --rc genhtml_function_coverage=1 00:31:05.960 --rc genhtml_legend=1 00:31:05.960 --rc geninfo_all_blocks=1 00:31:05.960 --rc geninfo_unexecuted_blocks=1 00:31:05.960 00:31:05.960 ' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:05.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.960 --rc genhtml_branch_coverage=1 00:31:05.960 --rc genhtml_function_coverage=1 00:31:05.960 --rc genhtml_legend=1 00:31:05.960 --rc geninfo_all_blocks=1 00:31:05.960 --rc geninfo_unexecuted_blocks=1 00:31:05.960 00:31:05.960 ' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:05.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.960 --rc genhtml_branch_coverage=1 00:31:05.960 --rc genhtml_function_coverage=1 00:31:05.960 --rc genhtml_legend=1 00:31:05.960 --rc geninfo_all_blocks=1 00:31:05.960 --rc geninfo_unexecuted_blocks=1 00:31:05.960 00:31:05.960 ' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:05.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.960 --rc genhtml_branch_coverage=1 00:31:05.960 --rc genhtml_function_coverage=1 00:31:05.960 --rc genhtml_legend=1 00:31:05.960 --rc geninfo_all_blocks=1 00:31:05.960 --rc geninfo_unexecuted_blocks=1 00:31:05.960 00:31:05.960 ' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.960 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:05.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:05.961 12:34:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:11.229 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:11.229 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.229 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:11.230 Found net devices under 0000:af:00.0: cvl_0_0 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:11.230 Found net devices under 0000:af:00.1: cvl_0_1 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.230 12:34:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:11.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:11.489 00:31:11.489 --- 10.0.0.2 ping statistics --- 00:31:11.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.489 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:11.489 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:31:11.489 00:31:11.490 --- 10.0.0.1 ping statistics --- 00:31:11.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.490 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3804931 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3804931 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3804931 ']' 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:11.490 12:34:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:11.749 [2024-12-10 12:34:18.370383] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
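For context on the setup that just ran: because this is a phy (real NIC) run, nvmftestinit splits the two E810 ports discovered earlier -- cvl_0_0 and cvl_0_1 -- so a single host can act as both target and initiator. The target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, an iptables ACCEPT rule opens port 4420, and both directions are ping-verified before the target starts. A condensed sketch of that plumbing, with interface names and addresses taken from the trace (the address flushes, iptables comment tag, and cleanup are omitted):

# Sketch only: the namespace setup performed by nvmftestinit above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
# The target itself then runs inside the namespace (path relative to an SPDK checkout):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

nvmf_tgt is launched with the ip netns exec prefix (NVMF_TARGET_NS_CMD is prepended to NVMF_APP), and waitforlisten then waits for its RPC socket /var/tmp/spdk.sock and pid 3804931 before the identify test proceeds.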
00:31:11.749 [2024-12-10 12:34:18.370473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.749 [2024-12-10 12:34:18.487123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.008 [2024-12-10 12:34:18.598280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.008 [2024-12-10 12:34:18.598324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.008 [2024-12-10 12:34:18.598335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.008 [2024-12-10 12:34:18.598345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.008 [2024-12-10 12:34:18.598353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.008 [2024-12-10 12:34:18.600704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.008 [2024-12-10 12:34:18.600779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.008 [2024-12-10 12:34:18.600842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.008 [2024-12-10 12:34:18.600852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 [2024-12-10 12:34:19.182525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 Malloc0 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 [2024-12-10 12:34:19.336363] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.574 [ 00:31:12.574 { 00:31:12.574 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:12.574 "subtype": "Discovery", 00:31:12.574 "listen_addresses": [ 00:31:12.574 { 00:31:12.574 "trtype": "TCP", 00:31:12.574 "adrfam": "IPv4", 00:31:12.574 "traddr": "10.0.0.2", 00:31:12.574 "trsvcid": "4420" 00:31:12.574 } 00:31:12.574 ], 00:31:12.574 "allow_any_host": true, 00:31:12.574 "hosts": [] 00:31:12.574 }, 00:31:12.574 { 00:31:12.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.574 "subtype": "NVMe", 00:31:12.574 "listen_addresses": [ 00:31:12.574 { 00:31:12.574 "trtype": "TCP", 00:31:12.574 "adrfam": "IPv4", 00:31:12.574 "traddr": "10.0.0.2", 00:31:12.574 "trsvcid": "4420" 00:31:12.574 } 00:31:12.574 ], 00:31:12.574 "allow_any_host": true, 00:31:12.574 "hosts": [], 00:31:12.574 "serial_number": "SPDK00000000000001", 00:31:12.574 "model_number": "SPDK bdev Controller", 00:31:12.574 "max_namespaces": 32, 00:31:12.574 "min_cntlid": 1, 00:31:12.574 "max_cntlid": 65519, 00:31:12.574 "namespaces": [ 00:31:12.574 { 00:31:12.574 "nsid": 1, 00:31:12.574 "bdev_name": "Malloc0", 00:31:12.574 "name": "Malloc0", 00:31:12.574 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:12.574 "eui64": "ABCDEF0123456789", 00:31:12.574 "uuid": "ea9c4f2f-7465-4605-8841-f9b8592597c8" 00:31:12.574 } 00:31:12.574 ] 00:31:12.574 } 00:31:12.574 ] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.574 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:12.834 [2024-12-10 12:34:19.409685] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:31:12.834 [2024-12-10 12:34:19.409751] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805175 ] 00:31:12.834 [2024-12-10 12:34:19.471786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:12.834 [2024-12-10 12:34:19.471882] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:12.834 [2024-12-10 12:34:19.471892] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:12.834 [2024-12-10 12:34:19.471912] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:12.834 [2024-12-10 12:34:19.471926] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:12.834 [2024-12-10 12:34:19.472533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:12.834 [2024-12-10 12:34:19.472579] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:12.834 [2024-12-10 12:34:19.479181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:12.834 [2024-12-10 12:34:19.479207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:12.834 [2024-12-10 12:34:19.479218] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:12.834 [2024-12-10 12:34:19.479224] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:12.834 [2024-12-10 12:34:19.479279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.479288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.479296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.834 [2024-12-10 12:34:19.479318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:12.834 [2024-12-10 12:34:19.479343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.834 [2024-12-10 12:34:19.486181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.834 [2024-12-10 12:34:19.486219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.834 [2024-12-10 12:34:19.486225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.486233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.834 [2024-12-10 12:34:19.486250] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:12.834 [2024-12-10 12:34:19.486267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:12.834 [2024-12-10 12:34:19.486278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:12.834 [2024-12-10 
12:34:19.486305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.486312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.486318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.834 [2024-12-10 12:34:19.486333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.834 [2024-12-10 12:34:19.486355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.834 [2024-12-10 12:34:19.486590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.834 [2024-12-10 12:34:19.486599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.834 [2024-12-10 12:34:19.486605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.486611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.834 [2024-12-10 12:34:19.486627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:12.834 [2024-12-10 12:34:19.486640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:12.834 [2024-12-10 12:34:19.486650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.486658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.486664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.834 [2024-12-10 12:34:19.486681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.834 [2024-12-10 12:34:19.486697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.834 [2024-12-10 12:34:19.486785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.834 [2024-12-10 12:34:19.486794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.834 [2024-12-10 12:34:19.486799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.834 [2024-12-10 12:34:19.486804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.834 [2024-12-10 12:34:19.486812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:12.834 [2024-12-10 12:34:19.486825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:12.834 [2024-12-10 12:34:19.486838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.486844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.486854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.486864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.835 [2024-12-10 12:34:19.486878] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.835 [2024-12-10 12:34:19.486968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.835 [2024-12-10 12:34:19.486977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.835 [2024-12-10 12:34:19.486982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.486987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.835 [2024-12-10 12:34:19.486997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:12.835 [2024-12-10 12:34:19.487010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.487032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.835 [2024-12-10 12:34:19.487048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.835 [2024-12-10 12:34:19.487120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.835 [2024-12-10 12:34:19.487128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.835 [2024-12-10 12:34:19.487133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.835 [2024-12-10 12:34:19.487145] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:12.835 [2024-12-10 12:34:19.487153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:12.835 [2024-12-10 12:34:19.487171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:12.835 [2024-12-10 12:34:19.487280] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:12.835 [2024-12-10 12:34:19.487286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:12.835 [2024-12-10 12:34:19.487303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.487328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.835 [2024-12-10 12:34:19.487346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.835 [2024-12-10 12:34:19.487468] 
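The trace to this point is the stock NVMe controller-initialization state machine: read VS, read CAP, check CC.EN, disable and wait for CSTS.RDY = 0, then write CC.EN = 1 and wait for CSTS.RDY = 1. Over fabrics, each of those register accesses becomes a FABRIC PROPERTY GET/SET capsule rather than an MMIO access. Below is a minimal sketch of the same connect-and-inspect flow through SPDK's public C API; only the target address and discovery NQN come from this log, the program itself is illustrative and not part of the test run.

/*
 * Minimal sketch, not part of the test run: connect to the discovery
 * subsystem exercised above and dump the VS/CAP/CSTS registers that the
 * "read vs" / "read cap" / "check en" states fetch via FABRIC PROPERTY GET.
 * Error handling is trimmed; only the address and NQN come from the log.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target string the spdk_nvme_identify invocation above uses. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the whole sequence shown in the trace:
	 * icreq, FABRIC CONNECT, register reads, the CC.EN handshake. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* For this target the dump below reports VS 1.3, Maximum Queue
	 * Entries 128 (MQES 127) and Reset Timeout 15000 ms (TO 30). */
	printf("VS %u.%u CAP.MQES %u CAP.TO %u (x500 ms) CSTS.RDY %u\n",
	       vs.bits.mjr, vs.bits.mnr, cap.bits.mqes, cap.bits.to,
	       csts.bits.rdy);

	spdk_nvme_detach(ctrlr);
	return 0;
}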
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.835 [2024-12-10 12:34:19.487477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.835 [2024-12-10 12:34:19.487481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.835 [2024-12-10 12:34:19.487494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:12.835 [2024-12-10 12:34:19.487508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.487533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.835 [2024-12-10 12:34:19.487546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.835 [2024-12-10 12:34:19.487625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.835 [2024-12-10 12:34:19.487636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.835 [2024-12-10 12:34:19.487640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.835 [2024-12-10 12:34:19.487653] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:12.835 [2024-12-10 12:34:19.487660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:12.835 [2024-12-10 12:34:19.487672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:12.835 [2024-12-10 12:34:19.487688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:12.835 [2024-12-10 12:34:19.487702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.487719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.835 [2024-12-10 12:34:19.487734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.835 [2024-12-10 12:34:19.487892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:12.835 [2024-12-10 12:34:19.487900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:12.835 [2024-12-10 12:34:19.487905] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487911] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:12.835 [2024-12-10 12:34:19.487918] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:12.835 [2024-12-10 12:34:19.487924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487937] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487944] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.835 [2024-12-10 12:34:19.487967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.835 [2024-12-10 12:34:19.487972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.487977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.835 [2024-12-10 12:34:19.487991] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:12.835 [2024-12-10 12:34:19.487998] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:12.835 [2024-12-10 12:34:19.488005] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:12.835 [2024-12-10 12:34:19.488016] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:12.835 [2024-12-10 12:34:19.488023] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:12.835 [2024-12-10 12:34:19.488030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:12.835 [2024-12-10 12:34:19.488044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:12.835 [2024-12-10 12:34:19.488057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.488086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:12.835 [2024-12-10 12:34:19.488102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.835 [2024-12-10 12:34:19.488224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.835 [2024-12-10 12:34:19.488233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.835 [2024-12-10 12:34:19.488238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.835 [2024-12-10 12:34:19.488254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488260] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.488280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.835 [2024-12-10 12:34:19.488288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:12.835 [2024-12-10 12:34:19.488311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.835 [2024-12-10 12:34:19.488319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.835 [2024-12-10 12:34:19.488329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.488337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.836 [2024-12-10 12:34:19.488344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.488362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.836 [2024-12-10 12:34:19.488369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:12.836 [2024-12-10 12:34:19.488385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:12.836 [2024-12-10 12:34:19.488394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.488413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.836 [2024-12-10 12:34:19.488429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:12.836 [2024-12-10 12:34:19.488436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:12.836 [2024-12-10 12:34:19.488442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:12.836 [2024-12-10 12:34:19.488448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:12.836 [2024-12-10 12:34:19.488454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:12.836 [2024-12-10 12:34:19.488574] 
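At this point the host arms asynchronous event reporting: one SET FEATURES ASYNC EVENT CONFIGURATION (cdw10 0x0b) followed by four ASYNC EVENT REQUEST commands (cid 0 through 3) that the target parks until it has an event to deliver, then a GET FEATURES KEEP ALIVE TIMER. A hedged sketch of consuming those events with SPDK's public API follows; the callback and stop flag are made-up names, and the ctrlr is assumed connected as in the previous sketch.

/* Hedged sketch: consuming the AERs queued above. Assumes `ctrlr` was
 * connected as in the previous sketch; aer_cb/g_stop/poll_admin are
 * illustrative names, not part of this test. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static volatile bool g_stop;

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		return;
	}
	/* CDW0 carries the event type/info. For this discovery controller
	 * the relevant event is a discovery log change; the dump further
	 * down shows "Discovery Log Change Notices: Supported". */
	printf("AER: cdw0 0x%08x\n", cpl->cdw0);
}

static void
poll_admin(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* Servicing the admin queue also emits the KEEP ALIVE (18h)
	 * commands that appear further down in the trace, once the
	 * negotiated timeout ("Sending keep alive every 5000000 us",
	 * i.e. 5 s) comes due. */
	while (!g_stop) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}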
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.836 [2024-12-10 12:34:19.488583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.836 [2024-12-10 12:34:19.488588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:12.836 [2024-12-10 12:34:19.488603] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:12.836 [2024-12-10 12:34:19.488611] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:12.836 [2024-12-10 12:34:19.488629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.488645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.836 [2024-12-10 12:34:19.488660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:12.836 [2024-12-10 12:34:19.488751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:12.836 [2024-12-10 12:34:19.488765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:12.836 [2024-12-10 12:34:19.488773] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488779] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:12.836 [2024-12-10 12:34:19.488785] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:12.836 [2024-12-10 12:34:19.488792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488808] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.488816] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.836 [2024-12-10 12:34:19.531201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.836 [2024-12-10 12:34:19.531206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:12.836 [2024-12-10 12:34:19.531237] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:12.836 [2024-12-10 12:34:19.531283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.531306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.836 [2024-12-10 12:34:19.531316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:31:12.836 [2024-12-10 12:34:19.531322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.531337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.836 [2024-12-10 12:34:19.531357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:12.836 [2024-12-10 12:34:19.531365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:12.836 [2024-12-10 12:34:19.531642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:12.836 [2024-12-10 12:34:19.531654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:12.836 [2024-12-10 12:34:19.531660] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531669] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=1024, cccid=4 00:31:12.836 [2024-12-10 12:34:19.531675] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=1024 00:31:12.836 [2024-12-10 12:34:19.531682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531692] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531698] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.836 [2024-12-10 12:34:19.531713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.836 [2024-12-10 12:34:19.531718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.531724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:12.836 [2024-12-10 12:34:19.573331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.836 [2024-12-10 12:34:19.573351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.836 [2024-12-10 12:34:19.573356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:12.836 [2024-12-10 12:34:19.573398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.573418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.836 [2024-12-10 12:34:19.573444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:12.836 [2024-12-10 12:34:19.573604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:12.836 [2024-12-10 12:34:19.573614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:12.836 [2024-12-10 12:34:19.573618] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:12.836 
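The GET LOG PAGE (02h) commands around here fetch the discovery log, page ID 0x70 in the low byte of cdw10, with NUMDL in the upper half encoding (dwords - 1): cdw10 0x00ff0070 requests 256 dwords, i.e. the 1024-byte page header; once genctr/numrec are known, cdw10 0x02ff0070 re-reads 3072 bytes (header plus two 1024-byte records, matching datal=3072); and the 8-byte read just below (cdw10 0x00010070) re-fetches the generation counter to confirm the snapshot is stable. A sketch of the header read through the public API, with an illustrative completion flag and polling loop:

/* Sketch of the 1024-byte discovery-log header read encoded by
 * cdw10 0x00ff0070 above. Assumes a connected discovery `ctrlr`;
 * g_done/read_discovery_header are illustrative names. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static volatile bool g_done;

static void
log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
}

static void
read_discovery_header(struct spdk_nvme_ctrlr *ctrlr)
{
	static uint8_t buf[1024]; /* (numdl + 1) * 4 from cdw10 0x00ff0070 */
	struct spdk_nvmf_discovery_log_page *header = (void *)buf;

	g_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     0 /* nsid, as in the trace */,
					     buf, sizeof(buf), 0 /* offset */,
					     log_page_cb, NULL) != 0) {
		return;
	}
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	/* The dump below reports Generation Counter 2 and 2 records; each
	 * record is a further 1024 bytes, hence the 3072-byte follow-up. */
	printf("genctr %" PRIu64 " numrec %" PRIu64 "\n",
	       header->genctr, header->numrec);
}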
[2024-12-10 12:34:19.573624] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=3072, cccid=4 00:31:12.836 [2024-12-10 12:34:19.573630] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=3072 00:31:12.836 [2024-12-10 12:34:19.573637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573646] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573651] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.836 [2024-12-10 12:34:19.573673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.836 [2024-12-10 12:34:19.573678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:12.836 [2024-12-10 12:34:19.573699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:12.836 [2024-12-10 12:34:19.573717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.836 [2024-12-10 12:34:19.573737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:12.836 [2024-12-10 12:34:19.573847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:12.836 [2024-12-10 12:34:19.573859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:12.836 [2024-12-10 12:34:19.573864] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573869] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8, cccid=4 00:31:12.836 [2024-12-10 12:34:19.573875] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=8 00:31:12.836 [2024-12-10 12:34:19.573881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573890] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.573895] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:12.836 [2024-12-10 12:34:19.614369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.836 [2024-12-10 12:34:19.614388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.836 [2024-12-10 12:34:19.614393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.837 [2024-12-10 12:34:19.614399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:12.837 ===================================================== 00:31:12.837 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:12.837 ===================================================== 00:31:12.837 Controller Capabilities/Features 00:31:12.837 ================================ 00:31:12.837 Vendor ID: 0000 00:31:12.837 Subsystem Vendor ID: 0000 
00:31:12.837 Serial Number: .................... 00:31:12.837 Model Number: ........................................ 00:31:12.837 Firmware Version: 25.01 00:31:12.837 Recommended Arb Burst: 0 00:31:12.837 IEEE OUI Identifier: 00 00 00 00:31:12.837 Multi-path I/O 00:31:12.837 May have multiple subsystem ports: No 00:31:12.837 May have multiple controllers: No 00:31:12.837 Associated with SR-IOV VF: No 00:31:12.837 Max Data Transfer Size: 131072 00:31:12.837 Max Number of Namespaces: 0 00:31:12.837 Max Number of I/O Queues: 1024 00:31:12.837 NVMe Specification Version (VS): 1.3 00:31:12.837 NVMe Specification Version (Identify): 1.3 00:31:12.837 Maximum Queue Entries: 128 00:31:12.837 Contiguous Queues Required: Yes 00:31:12.837 Arbitration Mechanisms Supported 00:31:12.837 Weighted Round Robin: Not Supported 00:31:12.837 Vendor Specific: Not Supported 00:31:12.837 Reset Timeout: 15000 ms 00:31:12.837 Doorbell Stride: 4 bytes 00:31:12.837 NVM Subsystem Reset: Not Supported 00:31:12.837 Command Sets Supported 00:31:12.837 NVM Command Set: Supported 00:31:12.837 Boot Partition: Not Supported 00:31:12.837 Memory Page Size Minimum: 4096 bytes 00:31:12.837 Memory Page Size Maximum: 4096 bytes 00:31:12.837 Persistent Memory Region: Not Supported 00:31:12.837 Optional Asynchronous Events Supported 00:31:12.837 Namespace Attribute Notices: Not Supported 00:31:12.837 Firmware Activation Notices: Not Supported 00:31:12.837 ANA Change Notices: Not Supported 00:31:12.837 PLE Aggregate Log Change Notices: Not Supported 00:31:12.837 LBA Status Info Alert Notices: Not Supported 00:31:12.837 EGE Aggregate Log Change Notices: Not Supported 00:31:12.837 Normal NVM Subsystem Shutdown event: Not Supported 00:31:12.837 Zone Descriptor Change Notices: Not Supported 00:31:12.837 Discovery Log Change Notices: Supported 00:31:12.837 Controller Attributes 00:31:12.837 128-bit Host Identifier: Not Supported 00:31:12.837 Non-Operational Permissive Mode: Not Supported 00:31:12.837 NVM Sets: Not Supported 00:31:12.837 Read Recovery Levels: Not Supported 00:31:12.837 Endurance Groups: Not Supported 00:31:12.837 Predictable Latency Mode: Not Supported 00:31:12.837 Traffic Based Keep ALive: Not Supported 00:31:12.837 Namespace Granularity: Not Supported 00:31:12.837 SQ Associations: Not Supported 00:31:12.837 UUID List: Not Supported 00:31:12.837 Multi-Domain Subsystem: Not Supported 00:31:12.837 Fixed Capacity Management: Not Supported 00:31:12.837 Variable Capacity Management: Not Supported 00:31:12.837 Delete Endurance Group: Not Supported 00:31:12.837 Delete NVM Set: Not Supported 00:31:12.837 Extended LBA Formats Supported: Not Supported 00:31:12.837 Flexible Data Placement Supported: Not Supported 00:31:12.837 00:31:12.837 Controller Memory Buffer Support 00:31:12.837 ================================ 00:31:12.837 Supported: No 00:31:12.837 00:31:12.837 Persistent Memory Region Support 00:31:12.837 ================================ 00:31:12.837 Supported: No 00:31:12.837 00:31:12.837 Admin Command Set Attributes 00:31:12.837 ============================ 00:31:12.837 Security Send/Receive: Not Supported 00:31:12.837 Format NVM: Not Supported 00:31:12.837 Firmware Activate/Download: Not Supported 00:31:12.837 Namespace Management: Not Supported 00:31:12.837 Device Self-Test: Not Supported 00:31:12.837 Directives: Not Supported 00:31:12.837 NVMe-MI: Not Supported 00:31:12.837 Virtualization Management: Not Supported 00:31:12.837 Doorbell Buffer Config: Not Supported 00:31:12.837 Get LBA Status Capability: Not Supported 
00:31:12.837 Command & Feature Lockdown Capability: Not Supported 00:31:12.837 Abort Command Limit: 1 00:31:12.837 Async Event Request Limit: 4 00:31:12.837 Number of Firmware Slots: N/A 00:31:12.837 Firmware Slot 1 Read-Only: N/A 00:31:12.837 Firmware Activation Without Reset: N/A 00:31:12.837 Multiple Update Detection Support: N/A 00:31:12.837 Firmware Update Granularity: No Information Provided 00:31:12.837 Per-Namespace SMART Log: No 00:31:12.837 Asymmetric Namespace Access Log Page: Not Supported 00:31:12.837 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:12.837 Command Effects Log Page: Not Supported 00:31:12.837 Get Log Page Extended Data: Supported 00:31:12.837 Telemetry Log Pages: Not Supported 00:31:12.837 Persistent Event Log Pages: Not Supported 00:31:12.837 Supported Log Pages Log Page: May Support 00:31:12.837 Commands Supported & Effects Log Page: Not Supported 00:31:12.837 Feature Identifiers & Effects Log Page:May Support 00:31:12.837 NVMe-MI Commands & Effects Log Page: May Support 00:31:12.837 Data Area 4 for Telemetry Log: Not Supported 00:31:12.837 Error Log Page Entries Supported: 128 00:31:12.837 Keep Alive: Not Supported 00:31:12.837 00:31:12.837 NVM Command Set Attributes 00:31:12.837 ========================== 00:31:12.837 Submission Queue Entry Size 00:31:12.837 Max: 1 00:31:12.837 Min: 1 00:31:12.837 Completion Queue Entry Size 00:31:12.837 Max: 1 00:31:12.837 Min: 1 00:31:12.837 Number of Namespaces: 0 00:31:12.837 Compare Command: Not Supported 00:31:12.837 Write Uncorrectable Command: Not Supported 00:31:12.837 Dataset Management Command: Not Supported 00:31:12.837 Write Zeroes Command: Not Supported 00:31:12.837 Set Features Save Field: Not Supported 00:31:12.837 Reservations: Not Supported 00:31:12.837 Timestamp: Not Supported 00:31:12.837 Copy: Not Supported 00:31:12.837 Volatile Write Cache: Not Present 00:31:12.837 Atomic Write Unit (Normal): 1 00:31:12.837 Atomic Write Unit (PFail): 1 00:31:12.837 Atomic Compare & Write Unit: 1 00:31:12.837 Fused Compare & Write: Supported 00:31:12.837 Scatter-Gather List 00:31:12.837 SGL Command Set: Supported 00:31:12.837 SGL Keyed: Supported 00:31:12.837 SGL Bit Bucket Descriptor: Not Supported 00:31:12.837 SGL Metadata Pointer: Not Supported 00:31:12.837 Oversized SGL: Not Supported 00:31:12.837 SGL Metadata Address: Not Supported 00:31:12.837 SGL Offset: Supported 00:31:12.837 Transport SGL Data Block: Not Supported 00:31:12.837 Replay Protected Memory Block: Not Supported 00:31:12.837 00:31:12.837 Firmware Slot Information 00:31:12.837 ========================= 00:31:12.837 Active slot: 0 00:31:12.837 00:31:12.837 00:31:12.837 Error Log 00:31:12.837 ========= 00:31:12.837 00:31:12.837 Active Namespaces 00:31:12.837 ================= 00:31:12.837 Discovery Log Page 00:31:12.837 ================== 00:31:12.837 Generation Counter: 2 00:31:12.837 Number of Records: 2 00:31:12.837 Record Format: 0 00:31:12.837 00:31:12.837 Discovery Log Entry 0 00:31:12.837 ---------------------- 00:31:12.837 Transport Type: 3 (TCP) 00:31:12.837 Address Family: 1 (IPv4) 00:31:12.837 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:12.837 Entry Flags: 00:31:12.837 Duplicate Returned Information: 1 00:31:12.837 Explicit Persistent Connection Support for Discovery: 1 00:31:12.837 Transport Requirements: 00:31:12.837 Secure Channel: Not Required 00:31:12.837 Port ID: 0 (0x0000) 00:31:12.837 Controller ID: 65535 (0xffff) 00:31:12.837 Admin Max SQ Size: 128 00:31:12.837 Transport Service Identifier: 4420 00:31:12.838 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:12.838 Transport Address: 10.0.0.2 00:31:12.838 Discovery Log Entry 1 00:31:12.838 ---------------------- 00:31:12.838 Transport Type: 3 (TCP) 00:31:12.838 Address Family: 1 (IPv4) 00:31:12.838 Subsystem Type: 2 (NVM Subsystem) 00:31:12.838 Entry Flags: 00:31:12.838 Duplicate Returned Information: 0 00:31:12.838 Explicit Persistent Connection Support for Discovery: 0 00:31:12.838 Transport Requirements: 00:31:12.838 Secure Channel: Not Required 00:31:12.838 Port ID: 0 (0x0000) 00:31:12.838 Controller ID: 65535 (0xffff) 00:31:12.838 Admin Max SQ Size: 128 00:31:12.838 Transport Service Identifier: 4420 00:31:12.838 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:12.838 Transport Address: 10.0.0.2 [2024-12-10 12:34:19.614522] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:12.838 [2024-12-10 12:34:19.614536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 12:34:19.614549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.838 [2024-12-10 12:34:19.614556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 12:34:19.614564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.838 [2024-12-10 12:34:19.614570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 12:34:19.614577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.838 [2024-12-10 12:34:19.614584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 12:34:19.614591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.838 [2024-12-10 12:34:19.614602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.614609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.614615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:12.838 [2024-12-10 12:34:19.614630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.838 [2024-12-10 12:34:19.614650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:12.838 [2024-12-10 12:34:19.614773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.838 [2024-12-10 12:34:19.614782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.838 [2024-12-10 12:34:19.614787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.614793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 12:34:19.614806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.614813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:31:12.838 [2024-12-10 12:34:19.614819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:12.838 [2024-12-10 12:34:19.614834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.838 [2024-12-10 12:34:19.614854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:12.838 [2024-12-10 12:34:19.614955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.838 [2024-12-10 12:34:19.614964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.838 [2024-12-10 12:34:19.614968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.614973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 12:34:19.614981] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:12.838 [2024-12-10 12:34:19.614988] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:12.838 [2024-12-10 12:34:19.615001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.615008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.615013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:12.838 [2024-12-10 12:34:19.615023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.838 [2024-12-10 12:34:19.615037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:12.838 [2024-12-10 12:34:19.615122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.838 [2024-12-10 12:34:19.615130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.838 [2024-12-10 12:34:19.615137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.615142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 12:34:19.615155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.615160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.619174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:12.838 [2024-12-10 12:34:19.619194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.838 [2024-12-10 12:34:19.619213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:12.838 [2024-12-10 12:34:19.619438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:12.838 [2024-12-10 12:34:19.619447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:12.838 [2024-12-10 12:34:19.619451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:12.838 [2024-12-10 12:34:19.619457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:12.838 [2024-12-10 
12:34:19.619468] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:31:12.838 00:31:13.097 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:13.097 [2024-12-10 12:34:19.715341] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:31:13.097 [2024-12-10 12:34:19.715406] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805182 ] 00:31:13.097 [2024-12-10 12:34:19.776615] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:13.097 [2024-12-10 12:34:19.776723] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:13.097 [2024-12-10 12:34:19.776736] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:13.097 [2024-12-10 12:34:19.776756] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:13.097 [2024-12-10 12:34:19.776769] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:13.097 [2024-12-10 12:34:19.777348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:13.097 [2024-12-10 12:34:19.777387] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:13.097 [2024-12-10 12:34:19.783183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:13.097 [2024-12-10 12:34:19.783205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:13.097 [2024-12-10 12:34:19.783215] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:13.097 [2024-12-10 12:34:19.783221] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:13.097 [2024-12-10 12:34:19.783270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.783279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.783288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.097 [2024-12-10 12:34:19.783307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:13.097 [2024-12-10 12:34:19.783334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.097 [2024-12-10 12:34:19.791181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.097 [2024-12-10 12:34:19.791203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.097 [2024-12-10 12:34:19.791209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.097 [2024-12-10 12:34:19.791235] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:13.097 [2024-12-10 
12:34:19.791249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:13.097 [2024-12-10 12:34:19.791258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:13.097 [2024-12-10 12:34:19.791275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.097 [2024-12-10 12:34:19.791304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.097 [2024-12-10 12:34:19.791324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.097 [2024-12-10 12:34:19.791506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.097 [2024-12-10 12:34:19.791517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.097 [2024-12-10 12:34:19.791523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.097 [2024-12-10 12:34:19.791544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:13.097 [2024-12-10 12:34:19.791556] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:13.097 [2024-12-10 12:34:19.791566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.097 [2024-12-10 12:34:19.791591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.097 [2024-12-10 12:34:19.791607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.097 [2024-12-10 12:34:19.791716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.097 [2024-12-10 12:34:19.791724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.097 [2024-12-10 12:34:19.791730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.097 [2024-12-10 12:34:19.791744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:13.097 [2024-12-10 12:34:19.791755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:13.097 [2024-12-10 12:34:19.791764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791778] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.097 [2024-12-10 12:34:19.791789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.097 [2024-12-10 12:34:19.791806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.097 [2024-12-10 12:34:19.791877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.097 [2024-12-10 12:34:19.791886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.097 [2024-12-10 12:34:19.791891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.097 [2024-12-10 12:34:19.791903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:13.097 [2024-12-10 12:34:19.791916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.791928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.097 [2024-12-10 12:34:19.791940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.097 [2024-12-10 12:34:19.791955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.097 [2024-12-10 12:34:19.792027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.097 [2024-12-10 12:34:19.792035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.097 [2024-12-10 12:34:19.792043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.097 [2024-12-10 12:34:19.792049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.097 [2024-12-10 12:34:19.792056] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:13.097 [2024-12-10 12:34:19.792066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:13.097 [2024-12-10 12:34:19.792077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:13.098 [2024-12-10 12:34:19.792185] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:13.098 [2024-12-10 12:34:19.792192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:13.098 [2024-12-10 12:34:19.792208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.792230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.098 [2024-12-10 12:34:19.792245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.098 [2024-12-10 12:34:19.792377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.792385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 [2024-12-10 12:34:19.792390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.792402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:13.098 [2024-12-10 12:34:19.792418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.792442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.098 [2024-12-10 12:34:19.792457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.098 [2024-12-10 12:34:19.792567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.792580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 [2024-12-10 12:34:19.792586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.792598] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:13.098 [2024-12-10 12:34:19.792605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.792616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:13.098 [2024-12-10 12:34:19.792629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.792645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.792664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.098 [2024-12-10 12:34:19.792678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.098 [2024-12-10 12:34:19.792797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.098 [2024-12-10 12:34:19.792806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.098 
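This second pass targets the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem. After the same enable handshake the host sends IDENTIFY with CNS 01h (cdw10:00000001) for the 4096-byte controller data structure, from which the driver derives the limits logged just below: MDTS-capped max_xfer_size 131072, CNTLID 0x0001, and fused compare-and-write support. A sketch of reading the cached copy via the public API; the wrapper function is illustrative.

/* Sketch: inspecting the controller data fetched by the IDENTIFY
 * (06h, CNS 01h) round trip above. Assumes `ctrlr` is the connected
 * nqn.2016-06.io.spdk:cnode1 controller; show_ctrlr_data is an
 * illustrative name. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
show_ctrlr_data(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Cached copy of the 4096-byte identify payload. */
	const struct spdk_nvme_ctrlr_data *cdata =
		spdk_nvme_ctrlr_get_data(ctrlr);

	/* "CNTLID 0x0001" and "fuses compare and write: 1" in the trace. */
	printf("cntlid 0x%04x model '%.40s' fused CW %u\n",
	       cdata->cntlid, cdata->mn, cdata->fuses.compare_and_write);
	/* MDTS combined with the transport limit yields 131072 here. */
	printf("max xfer size %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
}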
[2024-12-10 12:34:19.792810] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792817] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:13.098 [2024-12-10 12:34:19.792823] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.098 [2024-12-10 12:34:19.792830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792848] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.792932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 [2024-12-10 12:34:19.792937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.792942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.792958] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:13.098 [2024-12-10 12:34:19.792965] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:13.098 [2024-12-10 12:34:19.792974] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:13.098 [2024-12-10 12:34:19.792983] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:13.098 [2024-12-10 12:34:19.792989] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:13.098 [2024-12-10 12:34:19.792997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.098 [2024-12-10 12:34:19.793068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.098 [2024-12-10 12:34:19.793145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.793153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 [2024-12-10 12:34:19.793157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.793180] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.098 [2024-12-10 12:34:19.793212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.098 [2024-12-10 12:34:19.793240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.098 [2024-12-10 12:34:19.793265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.098 [2024-12-10 12:34:19.793289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.098 [2024-12-10 12:34:19.793345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.098 [2024-12-10 12:34:19.793352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:13.098 [2024-12-10 12:34:19.793360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:13.098 [2024-12-10 12:34:19.793365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.098 [2024-12-10 12:34:19.793372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b700, cid 4, qid 0 00:31:13.098 [2024-12-10 12:34:19.793499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.793508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 [2024-12-10 12:34:19.793512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.793530] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:13.098 [2024-12-10 12:34:19.793540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.098 [2024-12-10 12:34:19.793604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.098 [2024-12-10 12:34:19.793694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.793702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 [2024-12-10 12:34:19.793707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.793782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.793817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.793836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.098 [2024-12-10 12:34:19.793850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.098 [2024-12-10 12:34:19.793946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.098 [2024-12-10 12:34:19.793957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:31:13.098 [2024-12-10 12:34:19.793961] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793967] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:13.098 [2024-12-10 12:34:19.793973] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.098 [2024-12-10 12:34:19.793978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.793996] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.794002] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.839190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.839214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 [2024-12-10 12:34:19.839220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.839226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.839252] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:13.098 [2024-12-10 12:34:19.839270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.839284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.839297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.839304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.839316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.098 [2024-12-10 12:34:19.839335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.098 [2024-12-10 12:34:19.839548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.098 [2024-12-10 12:34:19.839557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.098 [2024-12-10 12:34:19.839562] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.839568] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:13.098 [2024-12-10 12:34:19.839574] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.098 [2024-12-10 12:34:19.839585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.839600] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.839606] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.881333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.098 [2024-12-10 12:34:19.881352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.098 
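The records above trace the identify phase of controller initialization over NVMe/TCP: each admin command leaves the host as a capsule command PDU, any returned data arrives as a C2H data PDU (the "pdu type = 7" records), and each exchange is closed by a capsule response PDU ("pdu type = 5"). As a rough, hypothetical way to drive the same identify traffic by hand, a Linux host with nvme-cli could reuse the target address, port, and subsystem NQN from this run; the /dev/nvme0 names below are an assumption (they hold only if the controller enumerates first):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0      # Identify Controller, as in the identify exchanges above
  nvme id-ns /dev/nvme0n1      # Identify Namespace for nsid 1, as logged above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The debug stream continues below with the namespace identify completions.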
[2024-12-10 12:34:19.881357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.881363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.098 [2024-12-10 12:34:19.881389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.881407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:13.098 [2024-12-10 12:34:19.881425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.881432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.098 [2024-12-10 12:34:19.881444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.098 [2024-12-10 12:34:19.881462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.098 [2024-12-10 12:34:19.881558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.098 [2024-12-10 12:34:19.881566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.098 [2024-12-10 12:34:19.881571] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.881579] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:13.098 [2024-12-10 12:34:19.881586] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.098 [2024-12-10 12:34:19.881591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.881606] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.098 [2024-12-10 12:34:19.881612] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.359 [2024-12-10 12:34:19.925206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.359 [2024-12-10 12:34:19.925211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.359 [2024-12-10 12:34:19.925239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:13.359 [2024-12-10 12:34:19.925253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:13.359 [2024-12-10 12:34:19.925264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:13.359 [2024-12-10 12:34:19.925272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:13.359 [2024-12-10 12:34:19.925279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to set doorbell buffer config (timeout 30000 ms) 00:31:13.359 [2024-12-10 12:34:19.925287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:13.359 [2024-12-10 12:34:19.925294] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:13.359 [2024-12-10 12:34:19.925303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:13.359 [2024-12-10 12:34:19.925310] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:13.359 [2024-12-10 12:34:19.925341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.359 [2024-12-10 12:34:19.925363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-12-10 12:34:19.925372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.359 [2024-12-10 12:34:19.925393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.359 [2024-12-10 12:34:19.925414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.359 [2024-12-10 12:34:19.925422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.359 [2024-12-10 12:34:19.925545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.359 [2024-12-10 12:34:19.925559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.359 [2024-12-10 12:34:19.925565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.359 [2024-12-10 12:34:19.925580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.359 [2024-12-10 12:34:19.925590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.359 [2024-12-10 12:34:19.925595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.359 [2024-12-10 12:34:19.925613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.359 [2024-12-10 12:34:19.925629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-12-10 12:34:19.925643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.359 [2024-12-10 12:34:19.925722] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.359 [2024-12-10 12:34:19.925730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.359 [2024-12-10 12:34:19.925735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.359 [2024-12-10 12:34:19.925752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.359 [2024-12-10 12:34:19.925767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-12-10 12:34:19.925781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.359 [2024-12-10 12:34:19.925913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.359 [2024-12-10 12:34:19.925920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.359 [2024-12-10 12:34:19.925925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.359 [2024-12-10 12:34:19.925942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.925948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.359 [2024-12-10 12:34:19.925958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-12-10 12:34:19.925970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.359 [2024-12-10 12:34:19.926045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.359 [2024-12-10 12:34:19.926053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.359 [2024-12-10 12:34:19.926058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.926063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.359 [2024-12-10 12:34:19.926087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.359 [2024-12-10 12:34:19.926094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.359 [2024-12-10 12:34:19.926105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-12-10 12:34:19.926115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.360 [2024-12-10 12:34:19.926130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-12-10 12:34:19.926142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500001db80) 00:31:13.360 [2024-12-10 12:34:19.926160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-12-10 12:34:19.926182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:13.360 [2024-12-10 12:34:19.926199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-12-10 12:34:19.926215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.360 [2024-12-10 12:34:19.926224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.360 [2024-12-10 12:34:19.926230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:13.360 [2024-12-10 12:34:19.926236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:13.360 [2024-12-10 12:34:19.926406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.360 [2024-12-10 12:34:19.926416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.360 [2024-12-10 12:34:19.926421] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926427] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8192, cccid=5 00:31:13.360 [2024-12-10 12:34:19.926433] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500001db80): expected_datao=0, payload_size=8192 00:31:13.360 [2024-12-10 12:34:19.926440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926491] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926497] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.360 [2024-12-10 12:34:19.926516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.360 [2024-12-10 12:34:19.926521] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926526] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=4 00:31:13.360 [2024-12-10 12:34:19.926532] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:13.360 [2024-12-10 12:34:19.926537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926545] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926550] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.360 [2024-12-10 12:34:19.926563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =7 00:31:13.360 [2024-12-10 12:34:19.926568] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926573] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=6 00:31:13.360 [2024-12-10 12:34:19.926579] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:13.360 [2024-12-10 12:34:19.926585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926594] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926599] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.360 [2024-12-10 12:34:19.926613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.360 [2024-12-10 12:34:19.926619] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926624] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=7 00:31:13.360 [2024-12-10 12:34:19.926630] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.360 [2024-12-10 12:34:19.926635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926643] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926648] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.360 [2024-12-10 12:34:19.926664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.360 [2024-12-10 12:34:19.926668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.360 [2024-12-10 12:34:19.926695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.360 [2024-12-10 12:34:19.926711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.360 [2024-12-10 12:34:19.926715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.360 [2024-12-10 12:34:19.926732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.360 [2024-12-10 12:34:19.926740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.360 [2024-12-10 12:34:19.926744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500001db80 00:31:13.360 [2024-12-10 12:34:19.926758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.360 [2024-12-10 12:34:19.926766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.360 [2024-12-10 12:34:19.926770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.360 [2024-12-10 12:34:19.926775] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:13.360 ===================================================== 00:31:13.360 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:13.360 ===================================================== 00:31:13.360 Controller Capabilities/Features 00:31:13.360 ================================ 00:31:13.360 Vendor ID: 8086 00:31:13.360 Subsystem Vendor ID: 8086 00:31:13.360 Serial Number: SPDK00000000000001 00:31:13.360 Model Number: SPDK bdev Controller 00:31:13.360 Firmware Version: 25.01 00:31:13.360 Recommended Arb Burst: 6 00:31:13.360 IEEE OUI Identifier: e4 d2 5c 00:31:13.360 Multi-path I/O 00:31:13.360 May have multiple subsystem ports: Yes 00:31:13.360 May have multiple controllers: Yes 00:31:13.360 Associated with SR-IOV VF: No 00:31:13.360 Max Data Transfer Size: 131072 00:31:13.360 Max Number of Namespaces: 32 00:31:13.360 Max Number of I/O Queues: 127 00:31:13.360 NVMe Specification Version (VS): 1.3 00:31:13.360 NVMe Specification Version (Identify): 1.3 00:31:13.360 Maximum Queue Entries: 128 00:31:13.360 Contiguous Queues Required: Yes 00:31:13.360 Arbitration Mechanisms Supported 00:31:13.360 Weighted Round Robin: Not Supported 00:31:13.360 Vendor Specific: Not Supported 00:31:13.360 Reset Timeout: 15000 ms 00:31:13.360 Doorbell Stride: 4 bytes 00:31:13.360 NVM Subsystem Reset: Not Supported 00:31:13.360 Command Sets Supported 00:31:13.360 NVM Command Set: Supported 00:31:13.360 Boot Partition: Not Supported 00:31:13.360 Memory Page Size Minimum: 4096 bytes 00:31:13.360 Memory Page Size Maximum: 4096 bytes 00:31:13.360 Persistent Memory Region: Not Supported 00:31:13.360 Optional Asynchronous Events Supported 00:31:13.360 Namespace Attribute Notices: Supported 00:31:13.360 Firmware Activation Notices: Not Supported 00:31:13.360 ANA Change Notices: Not Supported 00:31:13.360 PLE Aggregate Log Change Notices: Not Supported 00:31:13.360 LBA Status Info Alert Notices: Not Supported 00:31:13.360 EGE Aggregate Log Change Notices: Not Supported 00:31:13.360 Normal NVM Subsystem Shutdown event: Not Supported 00:31:13.360 Zone Descriptor Change Notices: Not Supported 00:31:13.360 Discovery Log Change Notices: Not Supported 00:31:13.360 Controller Attributes 00:31:13.360 128-bit Host Identifier: Supported 00:31:13.360 Non-Operational Permissive Mode: Not Supported 00:31:13.360 NVM Sets: Not Supported 00:31:13.360 Read Recovery Levels: Not Supported 00:31:13.360 Endurance Groups: Not Supported 00:31:13.360 Predictable Latency Mode: Not Supported 00:31:13.361 Traffic Based Keep ALive: Not Supported 00:31:13.361 Namespace Granularity: Not Supported 00:31:13.361 SQ Associations: Not Supported 00:31:13.361 UUID List: Not Supported 00:31:13.361 Multi-Domain Subsystem: Not Supported 00:31:13.361 Fixed Capacity Management: Not Supported 00:31:13.361 Variable Capacity Management: Not Supported 00:31:13.361 Delete Endurance Group: Not Supported 00:31:13.361 Delete NVM Set: Not Supported 00:31:13.361 Extended LBA Formats Supported: Not Supported 00:31:13.361 Flexible Data Placement Supported: Not Supported 00:31:13.361 00:31:13.361 Controller Memory Buffer Support 00:31:13.361 ================================ 00:31:13.361 Supported: No 00:31:13.361 00:31:13.361 Persistent Memory Region Support 00:31:13.361 ================================ 00:31:13.361 Supported: No 00:31:13.361 00:31:13.361 Admin Command Set Attributes 00:31:13.361 ============================ 00:31:13.361 
Security Send/Receive: Not Supported 00:31:13.361 Format NVM: Not Supported 00:31:13.361 Firmware Activate/Download: Not Supported 00:31:13.361 Namespace Management: Not Supported 00:31:13.361 Device Self-Test: Not Supported 00:31:13.361 Directives: Not Supported 00:31:13.361 NVMe-MI: Not Supported 00:31:13.361 Virtualization Management: Not Supported 00:31:13.361 Doorbell Buffer Config: Not Supported 00:31:13.361 Get LBA Status Capability: Not Supported 00:31:13.361 Command & Feature Lockdown Capability: Not Supported 00:31:13.361 Abort Command Limit: 4 00:31:13.361 Async Event Request Limit: 4 00:31:13.361 Number of Firmware Slots: N/A 00:31:13.361 Firmware Slot 1 Read-Only: N/A 00:31:13.361 Firmware Activation Without Reset: N/A 00:31:13.361 Multiple Update Detection Support: N/A 00:31:13.361 Firmware Update Granularity: No Information Provided 00:31:13.361 Per-Namespace SMART Log: No 00:31:13.361 Asymmetric Namespace Access Log Page: Not Supported 00:31:13.361 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:13.361 Command Effects Log Page: Supported 00:31:13.361 Get Log Page Extended Data: Supported 00:31:13.361 Telemetry Log Pages: Not Supported 00:31:13.361 Persistent Event Log Pages: Not Supported 00:31:13.361 Supported Log Pages Log Page: May Support 00:31:13.361 Commands Supported & Effects Log Page: Not Supported 00:31:13.361 Feature Identifiers & Effects Log Page:May Support 00:31:13.361 NVMe-MI Commands & Effects Log Page: May Support 00:31:13.361 Data Area 4 for Telemetry Log: Not Supported 00:31:13.361 Error Log Page Entries Supported: 128 00:31:13.361 Keep Alive: Supported 00:31:13.361 Keep Alive Granularity: 10000 ms 00:31:13.361 00:31:13.361 NVM Command Set Attributes 00:31:13.361 ========================== 00:31:13.361 Submission Queue Entry Size 00:31:13.361 Max: 64 00:31:13.361 Min: 64 00:31:13.361 Completion Queue Entry Size 00:31:13.361 Max: 16 00:31:13.361 Min: 16 00:31:13.361 Number of Namespaces: 32 00:31:13.361 Compare Command: Supported 00:31:13.361 Write Uncorrectable Command: Not Supported 00:31:13.361 Dataset Management Command: Supported 00:31:13.361 Write Zeroes Command: Supported 00:31:13.361 Set Features Save Field: Not Supported 00:31:13.361 Reservations: Supported 00:31:13.361 Timestamp: Not Supported 00:31:13.361 Copy: Supported 00:31:13.361 Volatile Write Cache: Present 00:31:13.361 Atomic Write Unit (Normal): 1 00:31:13.361 Atomic Write Unit (PFail): 1 00:31:13.361 Atomic Compare & Write Unit: 1 00:31:13.361 Fused Compare & Write: Supported 00:31:13.361 Scatter-Gather List 00:31:13.361 SGL Command Set: Supported 00:31:13.361 SGL Keyed: Supported 00:31:13.361 SGL Bit Bucket Descriptor: Not Supported 00:31:13.361 SGL Metadata Pointer: Not Supported 00:31:13.361 Oversized SGL: Not Supported 00:31:13.361 SGL Metadata Address: Not Supported 00:31:13.361 SGL Offset: Supported 00:31:13.361 Transport SGL Data Block: Not Supported 00:31:13.361 Replay Protected Memory Block: Not Supported 00:31:13.361 00:31:13.361 Firmware Slot Information 00:31:13.361 ========================= 00:31:13.361 Active slot: 1 00:31:13.361 Slot 1 Firmware Revision: 25.01 00:31:13.361 00:31:13.361 00:31:13.361 Commands Supported and Effects 00:31:13.361 ============================== 00:31:13.361 Admin Commands 00:31:13.361 -------------- 00:31:13.361 Get Log Page (02h): Supported 00:31:13.361 Identify (06h): Supported 00:31:13.361 Abort (08h): Supported 00:31:13.361 Set Features (09h): Supported 00:31:13.361 Get Features (0Ah): Supported 00:31:13.361 Asynchronous Event Request 
(0Ch): Supported 00:31:13.361 Keep Alive (18h): Supported 00:31:13.361 I/O Commands 00:31:13.361 ------------ 00:31:13.361 Flush (00h): Supported LBA-Change 00:31:13.361 Write (01h): Supported LBA-Change 00:31:13.361 Read (02h): Supported 00:31:13.361 Compare (05h): Supported 00:31:13.361 Write Zeroes (08h): Supported LBA-Change 00:31:13.361 Dataset Management (09h): Supported LBA-Change 00:31:13.361 Copy (19h): Supported LBA-Change 00:31:13.361 00:31:13.361 Error Log 00:31:13.361 ========= 00:31:13.361 00:31:13.361 Arbitration 00:31:13.361 =========== 00:31:13.361 Arbitration Burst: 1 00:31:13.361 00:31:13.361 Power Management 00:31:13.361 ================ 00:31:13.361 Number of Power States: 1 00:31:13.361 Current Power State: Power State #0 00:31:13.361 Power State #0: 00:31:13.361 Max Power: 0.00 W 00:31:13.361 Non-Operational State: Operational 00:31:13.361 Entry Latency: Not Reported 00:31:13.361 Exit Latency: Not Reported 00:31:13.361 Relative Read Throughput: 0 00:31:13.361 Relative Read Latency: 0 00:31:13.361 Relative Write Throughput: 0 00:31:13.361 Relative Write Latency: 0 00:31:13.361 Idle Power: Not Reported 00:31:13.361 Active Power: Not Reported 00:31:13.361 Non-Operational Permissive Mode: Not Supported 00:31:13.361 00:31:13.361 Health Information 00:31:13.361 ================== 00:31:13.361 Critical Warnings: 00:31:13.361 Available Spare Space: OK 00:31:13.361 Temperature: OK 00:31:13.361 Device Reliability: OK 00:31:13.361 Read Only: No 00:31:13.361 Volatile Memory Backup: OK 00:31:13.361 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:13.361 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:13.361 Available Spare: 0% 00:31:13.361 Available Spare Threshold: 0% 00:31:13.361 Life Percentage Used: 0% [2024-12-10 12:34:19.926909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.361 [2024-12-10 12:34:19.926918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:13.361 [2024-12-10 12:34:19.926929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-12-10 12:34:19.926946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:13.361 [2024-12-10 12:34:19.927071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.361 [2024-12-10 12:34:19.927079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.361 [2024-12-10 12:34:19.927085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.361 [2024-12-10 12:34:19.927093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:13.361 [2024-12-10 12:34:19.927136] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:13.361 [2024-12-10 12:34:19.927149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.361 [2024-12-10 12:34:19.927159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-12-10 12:34:19.927173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:13.361 [2024-12-10 12:34:19.927180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-12-10 12:34:19.927189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.927196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.362 [2024-12-10 12:34:19.927203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.927210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.362 [2024-12-10 12:34:19.927220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.927244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.927261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.927383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.927397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.927402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.927418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.927441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.927460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.927558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.927567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.927571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.927583] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:13.362 [2024-12-10 12:34:19.927590] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:13.362 [2024-12-10 12:34:19.927606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.927628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.927642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.927734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.927742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.927747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.927764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.927787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.927800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.927888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.927897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.927901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.927919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.927930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.927939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.927952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.928037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.928045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.928049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.928066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.928091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.928104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.928178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.928186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.928191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.928210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.928230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.928243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.928313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.928321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.928326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.928343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.928365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.928378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.928451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.928459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.928463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.928480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.928500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.928513] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.928584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.928592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.928596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.928613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.362 [2024-12-10 12:34:19.928633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.362 [2024-12-10 12:34:19.928645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.362 [2024-12-10 12:34:19.928713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.362 [2024-12-10 12:34:19.928721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.362 [2024-12-10 12:34:19.928729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.362 [2024-12-10 12:34:19.928734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.362 [2024-12-10 12:34:19.928746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.928752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.928757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.363 [2024-12-10 12:34:19.928766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.363 [2024-12-10 12:34:19.928779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.363 [2024-12-10 12:34:19.928848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.363 [2024-12-10 12:34:19.928856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.363 [2024-12-10 12:34:19.928866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.928871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.363 [2024-12-10 12:34:19.928883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.928888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.928893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.363 [2024-12-10 12:34:19.928904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.363 [2024-12-10 12:34:19.928917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.363 [2024-12-10 12:34:19.928992] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.363 [2024-12-10 12:34:19.929001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.363 [2024-12-10 12:34:19.929005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.929010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.363 [2024-12-10 12:34:19.929022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.929027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.929032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.363 [2024-12-10 12:34:19.929041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.363 [2024-12-10 12:34:19.929056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.363 [2024-12-10 12:34:19.929126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.363 [2024-12-10 12:34:19.929134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.363 [2024-12-10 12:34:19.929138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.929143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.363 [2024-12-10 12:34:19.929155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.929160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.933172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.363 [2024-12-10 12:34:19.933196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.363 [2024-12-10 12:34:19.933216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.363 [2024-12-10 12:34:19.933381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.363 [2024-12-10 12:34:19.933390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.363 [2024-12-10 12:34:19.933395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.363 [2024-12-10 12:34:19.933401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.363 [2024-12-10 12:34:19.933412] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:31:13.363 Data Units Read: 0 00:31:13.363 Data Units Written: 0 00:31:13.363 Host Read Commands: 0 00:31:13.363 Host Write Commands: 0 00:31:13.363 Controller Busy Time: 0 minutes 00:31:13.363 Power Cycles: 0 00:31:13.363 Power On Hours: 0 hours 00:31:13.363 Unsafe Shutdowns: 0 00:31:13.363 Unrecoverable Media Errors: 0 00:31:13.363 Lifetime Error Log Entries: 0 00:31:13.363 Warning Temperature Time: 0 minutes 00:31:13.363 Critical Temperature Time: 0 minutes 00:31:13.363 00:31:13.363 Number of Queues 00:31:13.363 ================ 00:31:13.363 Number of I/O Submission Queues: 127 00:31:13.363 Number of I/O Completion Queues: 127 00:31:13.363 00:31:13.363 
Active Namespaces 00:31:13.363 ================= 00:31:13.363 Namespace ID:1 00:31:13.363 Error Recovery Timeout: Unlimited 00:31:13.363 Command Set Identifier: NVM (00h) 00:31:13.363 Deallocate: Supported 00:31:13.363 Deallocated/Unwritten Error: Not Supported 00:31:13.363 Deallocated Read Value: Unknown 00:31:13.363 Deallocate in Write Zeroes: Not Supported 00:31:13.363 Deallocated Guard Field: 0xFFFF 00:31:13.363 Flush: Supported 00:31:13.363 Reservation: Supported 00:31:13.363 Namespace Sharing Capabilities: Multiple Controllers 00:31:13.363 Size (in LBAs): 131072 (0GiB) 00:31:13.363 Capacity (in LBAs): 131072 (0GiB) 00:31:13.363 Utilization (in LBAs): 131072 (0GiB) 00:31:13.363 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:13.363 EUI64: ABCDEF0123456789 00:31:13.363 UUID: ea9c4f2f-7465-4605-8841-f9b8592597c8 00:31:13.363 Thin Provisioning: Not Supported 00:31:13.363 Per-NS Atomic Units: Yes 00:31:13.363 Atomic Boundary Size (Normal): 0 00:31:13.363 Atomic Boundary Size (PFail): 0 00:31:13.363 Atomic Boundary Offset: 0 00:31:13.363 Maximum Single Source Range Length: 65535 00:31:13.363 Maximum Copy Length: 65535 00:31:13.363 Maximum Source Range Count: 1 00:31:13.363 NGUID/EUI64 Never Reused: No 00:31:13.363 Namespace Write Protected: No 00:31:13.363 Number of LBA Formats: 1 00:31:13.363 Current LBA Format: LBA Format #00 00:31:13.363 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:13.363 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.363 12:34:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.363 rmmod nvme_tcp 00:31:13.363 rmmod nvme_fabrics 00:31:13.363 rmmod nvme_keyring 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3804931 ']' 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3804931 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3804931 ']' 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3804931 
00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3804931 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3804931' 00:31:13.363 killing process with pid 3804931 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3804931 00:31:13.363 12:34:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3804931 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.739 12:34:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.275 00:31:17.275 real 0m11.023s 00:31:17.275 user 0m12.188s 00:31:17.275 sys 0m4.715s 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.275 ************************************ 00:31:17.275 END TEST nvmf_identify 00:31:17.275 ************************************ 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.275 ************************************ 00:31:17.275 START TEST nvmf_perf 00:31:17.275 ************************************ 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 
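Before the perf test body begins, note how nvmf_identify was torn down above: killprocess stops the SPDK target (pid 3804931, reactor_0), and nvmftestfini then unloads the host-side fabrics modules, restores the firewall rules, removes the SPDK network namespace, and flushes the test interface. Condensed into a plain-shell sketch (a paraphrase of the traced helpers, not their actual implementation; the pid, module, namespace, and interface names are the ones from this run):

  kill 3804931                                           # killprocess: stop the SPDK target
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring         # nvmftestfini: unload host NVMe/TCP stack
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop SPDK-added firewall rules
  ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns: drop the test namespace
  ip -4 addr flush cvl_0_1                               # clear addresses from the test interface

The interleaved "rmmod nvme_tcp", "rmmod nvme_fabrics", and "rmmod nvme_keyring" lines above are the kernel confirming those module unloads; the run_test wrapper then prints the END/START banners and timing summary before launching perf.sh.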
00:31:17.275 * Looking for test storage... 00:31:17.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:17.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.275 --rc genhtml_branch_coverage=1 00:31:17.275 --rc genhtml_function_coverage=1 00:31:17.275 --rc genhtml_legend=1 00:31:17.275 --rc geninfo_all_blocks=1 00:31:17.275 --rc geninfo_unexecuted_blocks=1 00:31:17.275 00:31:17.275 ' 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:17.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.275 --rc genhtml_branch_coverage=1 00:31:17.275 --rc genhtml_function_coverage=1 00:31:17.275 --rc genhtml_legend=1 00:31:17.275 --rc geninfo_all_blocks=1 00:31:17.275 --rc geninfo_unexecuted_blocks=1 00:31:17.275 00:31:17.275 ' 00:31:17.275 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:17.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.275 --rc genhtml_branch_coverage=1 00:31:17.275 --rc genhtml_function_coverage=1 00:31:17.275 --rc genhtml_legend=1 00:31:17.276 --rc geninfo_all_blocks=1 00:31:17.276 --rc geninfo_unexecuted_blocks=1 00:31:17.276 00:31:17.276 ' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:17.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.276 --rc genhtml_branch_coverage=1 00:31:17.276 --rc genhtml_function_coverage=1 00:31:17.276 --rc genhtml_legend=1 00:31:17.276 --rc geninfo_all_blocks=1 00:31:17.276 --rc geninfo_unexecuted_blocks=1 00:31:17.276 00:31:17.276 ' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:17.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.276 12:34:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.276 12:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:22.542 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:22.542 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:22.542 Found net devices under 0000:af:00.0: cvl_0_0 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.542 12:34:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:22.542 Found net devices under 0000:af:00.1: cvl_0_1 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.542 12:34:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.542 12:34:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:22.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:22.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms
00:31:22.542
00:31:22.542 --- 10.0.0.2 ping statistics ---
00:31:22.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:22.542 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms
00:31:22.542 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:22.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:22.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms
00:31:22.542
00:31:22.542 --- 10.0.0.1 ping statistics ---
00:31:22.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:22.543 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
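The setup above pinned the target-side port (cvl_0_0) inside the cvl_0_0_ns_spdk namespace and left the initiator-side port (cvl_0_1) in the default namespace, so the 10.0.0.1 -> 10.0.0.2 pings cross a real link. On a box without the E810 port pair, the same two-endpoint topology can be approximated with a veth pair; the names below (tgt_ns, v_ini, v_tgt) are made up for this sketch and are not what the harness uses:

# Hypothetical veth stand-in for the cvl_0_0/cvl_0_1 wiring above (run as root).
ip netns add tgt_ns                              # target namespace, like cvl_0_0_ns_spdk
ip link add v_ini type veth peer name v_tgt
ip link set v_tgt netns tgt_ns                   # target end moves into the namespace
ip addr add 10.0.0.1/24 dev v_ini                # initiator IP, as in the log
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev v_tgt
ip link set v_ini up
ip netns exec tgt_ns ip link set v_tgt up
ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.2                               # same reachability probe as nvmf/common.sh@290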
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3808862
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3808862
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3808862 ']'
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:22.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:22.543 12:34:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:31:22.801 [2024-12-10 12:34:29.368518] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:31:22.801 [2024-12-10 12:34:29.368606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:22.801 [2024-12-10 12:34:29.486079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:22.801 [2024-12-10 12:34:29.598794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:22.801 [2024-12-10 12:34:29.598833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:22.801 [2024-12-10 12:34:29.598844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:22.801 [2024-12-10 12:34:29.598854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:22.801 [2024-12-10 12:34:29.598862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:22.801 [2024-12-10 12:34:29.601109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:22.801 [2024-12-10 12:34:29.601190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:22.801 [2024-12-10 12:34:29.601244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:22.801 [2024-12-10 12:34:29.601254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:23.369 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:23.369 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:31:23.369 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:23.369 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:23.369 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:31:23.626 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:23.626 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:31:23.626 12:34:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:31:26.906 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:31:26.906 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:31:26.906 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0
00:31:26.906 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:27.163 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
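With nvmf_tgt running inside the namespace and a 64 MiB, 512-byte-block Malloc bdev created, perf.sh assembles the export over JSON-RPC. Condensed, the sequence the next entries trace is as follows (the rpc/nqn shell variables are shorthand for this sketch; paths, NQN, serial and addresses are exactly as in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o                      # '-o' comes from NVMF_TRANSPORT_OPTS
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001  # -a: allow any host
$rpc nvmf_subsystem_add_ns $nqn Malloc0                   # becomes NSID 1
$rpc nvmf_subsystem_add_ns $nqn Nvme0n1                   # becomes NSID 2
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420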
00:31:27.163 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:31:27.164 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:31:27.164 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:31:27.164 12:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:31:27.164 [2024-12-10 12:34:33.974959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:27.421 12:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:27.421 12:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:31:27.421 12:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:27.678 12:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:31:27.678 12:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:31:27.936 12:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:28.194 [2024-12-10 12:34:34.829707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:28.194 12:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
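Every measurement from here on is a single spdk_nvme_perf invocation; only the queue depth (-q), IO size (-o), duration (-t) and the -r transport ID vary between runs, with -w randrw -M 50 holding a 50/50 random read/write mix throughout. The first, local baseline run below targets the PCIe controller directly; the NVMe/TCP form used for everything after it looks like this (the perf shell variable is shorthand for this sketch; the flags are taken from this run):

perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# 4 KiB IOs, queue depth 32, 50/50 randrw, 1 s, over NVMe/TCP to the target above
$perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'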
00:31:28.457 12:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:31:28.457 12:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:31:28.457 12:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:31:28.457 12:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:31:29.830 Initializing NVMe Controllers
00:31:29.830 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:31:29.830 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:31:29.830 Initialization complete. Launching workers.
00:31:29.830 ========================================================
00:31:29.830 Latency(us)
00:31:29.830 Device Information : IOPS MiB/s Average min max
00:31:29.830 PCIE (0000:5e:00.0) NSID 1 from core 0: 91151.69 356.06 350.45 42.88 5227.59
00:31:29.830 ========================================================
00:31:29.830 Total : 91151.69 356.06 350.45 42.88 5227.59
00:31:29.830
00:31:29.830 12:34:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:31.203 Initializing NVMe Controllers
00:31:31.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:31.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:31.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:31.203 Initialization complete. Launching workers.
00:31:31.203 ========================================================
00:31:31.203 Latency(us)
00:31:31.203 Device Information : IOPS MiB/s Average min max
00:31:31.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.68 0.36 11072.19 129.36 45675.04
00:31:31.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.77 0.25 15916.47 6914.18 47936.96
00:31:31.203 ========================================================
00:31:31.203 Total : 156.45 0.61 13077.78 129.36 47936.96
00:31:31.203
00:31:31.203 12:34:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:32.576 Initializing NVMe Controllers
00:31:32.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:32.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:32.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:32.576 Initialization complete. Launching workers.
00:31:32.576 ========================================================
00:31:32.576 Latency(us)
00:31:32.576 Device Information : IOPS MiB/s Average min max
00:31:32.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9330.62 36.45 3428.54 411.49 7671.31
00:31:32.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3838.84 15.00 8373.24 5807.28 16056.53
00:31:32.576 ========================================================
00:31:32.576 Total : 13169.46 51.44 4869.90 411.49 16056.53
00:31:32.576
00:31:32.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:31:32.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:31:32.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:35.855 Initializing NVMe Controllers
00:31:35.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:35.855 Controller IO queue size 128, less than required.
00:31:35.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:35.855 Controller IO queue size 128, less than required.
00:31:35.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:35.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:35.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:35.855 Initialization complete. Launching workers.
00:31:35.855 ========================================================
00:31:35.855 Latency(us)
00:31:35.855 Device Information : IOPS MiB/s Average min max
00:31:35.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1525.91 381.48 87992.04 47649.04 326639.68
00:31:35.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 543.97 135.99 253331.06 110891.61 573157.05
00:31:35.855 ========================================================
00:31:35.855 Total : 2069.88 517.47 131443.45 47649.04 573157.05
00:31:35.855
00:31:35.855 12:34:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:31:35.855 No valid NVMe controllers or AIO or URING devices found
00:31:35.855 Initializing NVMe Controllers
00:31:35.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:35.855 Controller IO queue size 128, less than required.
00:31:35.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:35.855 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:31:35.855 Controller IO queue size 128, less than required.
00:31:35.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:35.855 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:31:35.855 WARNING: Some requested NVMe devices were skipped
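The empty -o 36964 run above is expected rather than a failure: spdk_nvme_perf only keeps a namespace in the test when the IO size divides into whole sectors, and 36964 bytes is not a multiple of the 512-byte sector size, so both namespaces are dropped and nothing is measured. The arithmetic behind the two warnings:

# Why '-o 36964' left no namespaces to test (sector size 512 B):
echo $(( 36964 % 512 ))    # => 100, so the IO size is not sector-aligned
echo $(( 36964 / 512 ))    # => 72 whole sectors, with 100 bytes left over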
00:31:35.855 12:34:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:31:39.140 Initializing NVMe Controllers
00:31:39.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:39.140 Controller IO queue size 128, less than required.
00:31:39.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:39.140 Controller IO queue size 128, less than required.
00:31:39.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:39.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:39.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:39.140 Initialization complete. Launching workers.
00:31:39.140
00:31:39.140 ====================
00:31:39.140 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:31:39.140 TCP transport:
00:31:39.140 polls: 8008
00:31:39.140 idle_polls: 4910
00:31:39.140 sock_completions: 3098
00:31:39.140 nvme_completions: 5569
00:31:39.140 submitted_requests: 8344
00:31:39.140 queued_requests: 1
00:31:39.140
00:31:39.140 ====================
00:31:39.140 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:31:39.140 TCP transport:
00:31:39.140 polls: 11313
00:31:39.140 idle_polls: 8504
00:31:39.140 sock_completions: 2809
00:31:39.140 nvme_completions: 5571
00:31:39.140 submitted_requests: 8372
00:31:39.140 queued_requests: 1
00:31:39.140 ========================================================
00:31:39.140 Latency(us)
00:31:39.140 Device Information : IOPS MiB/s Average min max
00:31:39.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1390.91 347.73 99593.44 56302.09 490410.76
00:31:39.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1391.40 347.85 93942.10 45786.45 415457.06
00:31:39.140 ========================================================
00:31:39.140 Total : 2782.31 695.58 96767.27 45786.45 490410.76
00:31:39.140
00:31:39.140 12:34:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:31:39.140 12:34:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:39.141 12:34:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:31:39.141 12:34:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']'
00:31:39.141 12:34:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:31:42.420 12:34:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3dd4f655-bed4-462b-9b7a-3f8568cf5a97
00:31:42.420 12:34:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3dd4f655-bed4-462b-9b7a-3f8568cf5a97
00:31:42.420 12:34:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3dd4f655-bed4-462b-9b7a-3f8568cf5a97
00:31:42.420 12:34:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:31:42.420 12:34:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:31:42.420 12:34:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:31:42.420 12:34:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:31:42.420 {
00:31:42.420 "uuid": "3dd4f655-bed4-462b-9b7a-3f8568cf5a97",
00:31:42.420 "name": "lvs_0",
00:31:42.420 "base_bdev": "Nvme0n1",
00:31:42.420 "total_data_clusters": 238234,
00:31:42.420 "free_clusters": 238234,
00:31:42.420 "block_size": 512,
00:31:42.420 "cluster_size": 4194304
00:31:42.420 }
00:31:42.420 ]'
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3dd4f655-bed4-462b-9b7a-3f8568cf5a97") .free_clusters'
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234
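get_lvs_free_mb turns the lvstore JSON above into a MiB budget: free_clusters times cluster_size, scaled down to mebibytes. The helper itself lives in autotest_common.sh; this is just the arithmetic it performs for lvs_0, runnable in any shell:

fc=238234                            # free_clusters from bdev_lvol_get_lvstores
cs=4194304                           # cluster_size in bytes (4 MiB)
echo $(( fc * cs / 1024 / 1024 ))    # => 952936, the free_mb value derived next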
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3dd4f655-bed4-462b-9b7a-3f8568cf5a97") .cluster_size'
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936
00:31:42.420 952936
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']'
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480
00:31:42.420 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3dd4f655-bed4-462b-9b7a-3f8568cf5a97 lbd_0 20480
00:31:42.985 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=afe0e62e-6755-43a9-9291-a9040c6a1656
00:31:42.985 12:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore afe0e62e-6755-43a9-9291-a9040c6a1656 lvs_n_0
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9e51ee7c-beab-4547-ac15-fdcd6d14b3ed
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9e51ee7c-beab-4547-ac15-fdcd6d14b3ed
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=9e51ee7c-beab-4547-ac15-fdcd6d14b3ed
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:43.917 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:31:43.917 {
00:31:43.917 "uuid": "3dd4f655-bed4-462b-9b7a-3f8568cf5a97",
00:31:43.917 "name": "lvs_0",
00:31:43.917 "base_bdev": "Nvme0n1",
00:31:43.917 "total_data_clusters": 238234,
00:31:43.917 "free_clusters": 233114,
00:31:43.917 "block_size": 512,
00:31:43.917 "cluster_size": 4194304
00:31:43.917 },
00:31:43.917 {
00:31:43.918 "uuid": "9e51ee7c-beab-4547-ac15-fdcd6d14b3ed",
00:31:43.918 "name": "lvs_n_0",
00:31:43.918 "base_bdev": "afe0e62e-6755-43a9-9291-a9040c6a1656",
00:31:43.918 "total_data_clusters": 5114,
00:31:43.918 "free_clusters": 5114,
00:31:43.918 "block_size": 512,
00:31:43.918 "cluster_size": 4194304
00:31:43.918 }
00:31:43.918 ]'
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9e51ee7c-beab-4547-ac15-fdcd6d14b3ed") .free_clusters'
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9e51ee7c-beab-4547-ac15-fdcd6d14b3ed") .cluster_size'
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456
00:31:43.918 20456
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:31:43.918 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e51ee7c-beab-4547-ac15-fdcd6d14b3ed lbd_nest_0 20456
00:31:44.175 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=106e80d3-81be-4b9d-aa39-c9fa2bded561
00:31:44.175 12:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:44.433 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:31:44.433 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 106e80d3-81be-4b9d-aa39-c9fa2bded561
00:31:44.690 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:44.947 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:31:44.947 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:31:44.947 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:44.947 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:44.947 12:34:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:57.143 Initializing NVMe Controllers
00:31:57.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:57.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:57.143 Initialization complete. Launching workers.
00:31:57.143 ========================================================
00:31:57.143 Latency(us)
00:31:57.143 Device Information : IOPS MiB/s Average min max
00:31:57.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.68 0.02 21019.90 156.15 45675.08
00:31:57.143 ========================================================
00:31:57.143 Total : 47.68 0.02 21019.90 156.15 45675.08
00:31:57.143
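That run is the first point of the 3x2 sweep declared at perf.sh@95-@99 above: queue depths 1, 32 and 128 against IO sizes of 512 bytes and 128 KiB (131072), ten seconds each, all against the lbd_nest_0 namespace. Spelled out, with $perf as in the earlier sketch, the loop amounts to:

qd_depth=(1 32 128)
io_size=(512 131072)
for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        # 50/50 randrw for 10 s at each (queue depth, IO size) point
        $perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
done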
00:31:57.143 12:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:57.143 12:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:07.107 Initializing NVMe Controllers
00:32:07.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:07.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:07.107 Initialization complete. Launching workers.
00:32:07.107 ========================================================
00:32:07.107 Latency(us)
00:32:07.107 Device Information : IOPS MiB/s Average min max
00:32:07.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 65.80 8.22 15238.82 5046.43 59852.67
00:32:07.107 ========================================================
00:32:07.107 Total : 65.80 8.22 15238.82 5046.43 59852.67
00:32:07.107
00:32:07.107 12:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:32:07.107 12:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:07.107 12:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:17.077 Initializing NVMe Controllers
00:32:17.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:17.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:17.077 Initialization complete. Launching workers.
00:32:17.077 ========================================================
00:32:17.077 Latency(us)
00:32:17.077 Device Information : IOPS MiB/s Average min max
00:32:17.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8205.50 4.01 3899.74 281.12 10250.60
00:32:17.077 ========================================================
00:32:17.077 Total : 8205.50 4.01 3899.74 281.12 10250.60
00:32:17.077
00:32:17.077 12:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:17.077 12:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:27.043 Initializing NVMe Controllers
00:32:27.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:27.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:27.043 Initialization complete. Launching workers.
00:32:27.043 ========================================================
00:32:27.043 Latency(us)
00:32:27.043 Device Information : IOPS MiB/s Average min max
00:32:27.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3879.29 484.91 8250.09 817.23 29462.87
00:32:27.043 ========================================================
00:32:27.043 Total : 3879.29 484.91 8250.09 817.23 29462.87
00:32:27.043
00:32:27.043 12:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:32:27.043 12:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:27.043 12:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:37.003 Initializing NVMe Controllers
00:32:37.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:37.003 Controller IO queue size 128, less than required.
00:32:37.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:37.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:37.003 Initialization complete. Launching workers.
00:32:37.003 ========================================================
00:32:37.003 Latency(us)
00:32:37.003 Device Information : IOPS MiB/s Average min max
00:32:37.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12959.49 6.33 9881.96 1627.60 47848.07
00:32:37.003 ========================================================
00:32:37.003 Total : 12959.49 6.33 9881.96 1627.60 47848.07
00:32:37.003
00:32:37.003 12:35:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:37.003 12:35:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:49.197 Initializing NVMe Controllers
00:32:49.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:49.197 Controller IO queue size 128, less than required.
00:32:49.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:49.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:49.197 Initialization complete. Launching workers.
00:32:49.197 ========================================================
00:32:49.197 Latency(us)
00:32:49.197 Device Information : IOPS MiB/s Average min max
00:32:49.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.41 150.80 106949.22 23620.72 219691.21
00:32:49.197 ========================================================
00:32:49.197 Total : 1206.41 150.80 106949.22 23620.72 219691.21
00:32:49.197
00:32:49.197 12:35:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:49.197 12:35:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 106e80d3-81be-4b9d-aa39-c9fa2bded561
00:32:49.197 12:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:32:49.197 12:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete afe0e62e-6755-43a9-9291-a9040c6a1656
00:32:49.455 12:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
00:32:49.455 rmmod nvme_fabrics 00:32:49.455 rmmod nvme_keyring 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3808862 ']' 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3808862 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3808862 ']' 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3808862 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3808862 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3808862' 00:32:49.455 killing process with pid 3808862 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3808862 00:32:49.455 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3808862 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.983 12:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.945 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.945 00:32:53.945 real 1m37.024s 00:32:53.945 user 5m48.597s 00:32:53.945 sys 0m16.679s 00:32:53.945 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.946 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:53.946 ************************************ 00:32:53.946 END TEST nvmf_perf 00:32:53.946 ************************************ 00:32:53.946 12:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:53.946 12:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:53.946 12:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.946 12:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.946 ************************************ 00:32:53.946 START TEST nvmf_fio_host 00:32:53.946 ************************************ 00:32:53.946 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:54.250 * Looking for test storage... 00:32:54.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:54.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.250 --rc genhtml_branch_coverage=1 00:32:54.250 --rc genhtml_function_coverage=1 00:32:54.250 --rc genhtml_legend=1 00:32:54.250 --rc geninfo_all_blocks=1 00:32:54.250 --rc geninfo_unexecuted_blocks=1 00:32:54.250 00:32:54.250 ' 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:54.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.250 --rc genhtml_branch_coverage=1 00:32:54.250 --rc genhtml_function_coverage=1 00:32:54.250 --rc genhtml_legend=1 00:32:54.250 --rc geninfo_all_blocks=1 00:32:54.250 --rc geninfo_unexecuted_blocks=1 00:32:54.250 00:32:54.250 ' 00:32:54.250 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.251 --rc genhtml_branch_coverage=1 00:32:54.251 --rc genhtml_function_coverage=1 00:32:54.251 --rc genhtml_legend=1 00:32:54.251 --rc geninfo_all_blocks=1 00:32:54.251 --rc geninfo_unexecuted_blocks=1 00:32:54.251 00:32:54.251 ' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.251 --rc genhtml_branch_coverage=1 00:32:54.251 --rc genhtml_function_coverage=1 00:32:54.251 --rc genhtml_legend=1 00:32:54.251 --rc geninfo_all_blocks=1 00:32:54.251 --rc geninfo_unexecuted_blocks=1 00:32:54.251 00:32:54.251 ' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.251 12:36:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:54.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.251 
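The "[: : integer expression expected" complaint above is benign: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against an unset variable, the numeric test simply fails, and build_nvmf_app_args carries on. A defensive form that avoids the noise, sketched with a hypothetical variable and flag rather than the repo's actual code:

    # Default the variable so the numeric test always sees an integer:
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then   # SPDK_TEST_FOO is a hypothetical name
        NVMF_APP+=(--foo)                      # hypothetical flag, for illustration only
    fi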
12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:59.573 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:59.573 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:59.573 Found net devices under 0000:af:00.0: cvl_0_0 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:59.573 Found net devices under 0000:af:00.1: cvl_0_1 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:59.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:32:59.573 00:32:59.573 --- 10.0.0.2 ping statistics --- 00:32:59.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.573 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:32:59.573 00:32:59.573 --- 10.0.0.1 ping statistics --- 00:32:59.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.573 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.573 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3826331 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3826331 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3826331 ']' 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.574 12:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.833 [2024-12-10 12:36:06.441855] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
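With both cvl interfaces answering pings, fio.sh launches the target inside the server-side network namespace and waits for its RPC socket before issuing any RPCs. A condensed sketch of that bring-up; the polling loop is a simplified stand-in for the repo's waitforlisten helper:

    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &   # 4 reactors (mask 0xF), all tracepoints
    nvmfpid=$!
    # Poll the RPC socket until the app is ready:
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

Running the target in its own namespace lets the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) share one physical host while still exercising a real TCP path between the two interfaces.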
00:32:59.833 [2024-12-10 12:36:06.441944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.833 [2024-12-10 12:36:06.560039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.092 [2024-12-10 12:36:06.665716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.092 [2024-12-10 12:36:06.665762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.092 [2024-12-10 12:36:06.665773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.092 [2024-12-10 12:36:06.665783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.092 [2024-12-10 12:36:06.665791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.092 [2024-12-10 12:36:06.668156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.092 [2024-12-10 12:36:06.668242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.092 [2024-12-10 12:36:06.668271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.092 [2024-12-10 12:36:06.668280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.660 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.660 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:00.660 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:00.660 [2024-12-10 12:36:07.443862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.919 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:00.919 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.919 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.919 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:01.178 Malloc1 00:33:01.178 12:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.437 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:01.437 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:01.695 [2024-12-10 12:36:08.404774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.695 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:01.954 12:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:02.213 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:02.213 fio-3.35 00:33:02.213 Starting 1 thread 00:33:04.747 00:33:04.747 test: (groupid=0, jobs=1): err= 0: pid=3827299: Tue Dec 10 12:36:11 2024 00:33:04.747 read: IOPS=9894, BW=38.6MiB/s (40.5MB/s)(77.5MiB/2006msec) 00:33:04.747 slat (nsec): min=1696, max=211629, avg=1946.11, stdev=2150.98 00:33:04.747 clat (usec): min=3032, max=12151, avg=7070.55, stdev=588.21 00:33:04.747 lat (usec): min=3069, max=12153, avg=7072.49, stdev=588.08 00:33:04.747 clat percentiles (usec): 00:33:04.747 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6587], 00:33:04.747 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242], 00:33:04.747 | 70.00th=[ 7373], 
80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 7963], 00:33:04.747 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[10683], 99.95th=[11338], 00:33:04.747 | 99.99th=[12125] 00:33:04.747 bw ( KiB/s): min=38536, max=40168, per=99.99%, avg=39574.00, stdev=727.56, samples=4 00:33:04.747 iops : min= 9634, max=10042, avg=9893.50, stdev=181.89, samples=4 00:33:04.747 write: IOPS=9916, BW=38.7MiB/s (40.6MB/s)(77.7MiB/2006msec); 0 zone resets 00:33:04.747 slat (nsec): min=1747, max=193387, avg=2016.19, stdev=1575.53 00:33:04.747 clat (usec): min=2263, max=10767, avg=5781.76, stdev=486.42 00:33:04.747 lat (usec): min=2278, max=10769, avg=5783.78, stdev=486.32 00:33:04.747 clat percentiles (usec): 00:33:04.747 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:33:04.747 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:33:04.747 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:33:04.747 | 99.00th=[ 6980], 99.50th=[ 7767], 99.90th=[ 9110], 99.95th=[ 9503], 00:33:04.747 | 99.99th=[10814] 00:33:04.747 bw ( KiB/s): min=39120, max=40336, per=99.97%, avg=39652.00, stdev=570.85, samples=4 00:33:04.747 iops : min= 9780, max=10084, avg=9913.00, stdev=142.71, samples=4 00:33:04.747 lat (msec) : 4=0.10%, 10=99.81%, 20=0.09% 00:33:04.747 cpu : usr=76.01%, sys=22.84%, ctx=91, majf=0, minf=1502 00:33:04.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:04.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.747 issued rwts: total=19848,19892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.747 00:33:04.747 Run status group 0 (all jobs): 00:33:04.747 READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=77.5MiB (81.3MB), run=2006-2006msec 00:33:04.747 WRITE: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=77.7MiB (81.5MB), run=2006-2006msec 00:33:05.006 ----------------------------------------------------- 00:33:05.006 Suppressions used: 00:33:05.006 count bytes template 00:33:05.006 1 57 /usr/src/fio/parse.c 00:33:05.006 1 8 libtcmalloc_minimal.so 00:33:05.006 ----------------------------------------------------- 00:33:05.006 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:05.006 12:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.265 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:05.265 fio-3.35 00:33:05.265 Starting 1 thread 00:33:07.799 00:33:07.799 test: (groupid=0, jobs=1): err= 0: pid=3828055: Tue Dec 10 12:36:14 2024 00:33:07.799 read: IOPS=9436, BW=147MiB/s (155MB/s)(296MiB/2005msec) 00:33:07.799 slat (nsec): min=2643, max=97431, avg=3118.27, stdev=1466.59 00:33:07.799 clat (usec): min=2057, max=13992, avg=7688.28, stdev=1731.63 00:33:07.799 lat (usec): min=2059, max=13995, avg=7691.39, stdev=1731.66 00:33:07.799 clat percentiles (usec): 00:33:07.799 | 1.00th=[ 4228], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6194], 00:33:07.799 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7701], 60.00th=[ 8094], 00:33:07.799 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[10814], 00:33:07.799 | 99.00th=[12256], 99.50th=[12780], 99.90th=[13698], 99.95th=[13829], 00:33:07.799 | 99.99th=[13960] 00:33:07.799 bw ( KiB/s): min=68608, max=86336, per=49.47%, avg=74688.00, stdev=8158.10, samples=4 00:33:07.799 iops : min= 4288, max= 5396, avg=4668.00, stdev=509.88, samples=4 00:33:07.799 write: IOPS=5550, BW=86.7MiB/s (90.9MB/s)(153MiB/1761msec); 0 zone resets 00:33:07.799 slat (usec): min=27, max=280, avg=31.90, stdev= 5.58 00:33:07.799 clat (usec): min=5903, max=17472, avg=10188.20, stdev=1723.91 00:33:07.799 lat (usec): min=5934, max=17502, avg=10220.10, stdev=1723.95 00:33:07.799 clat percentiles (usec): 00:33:07.799 | 1.00th=[ 6915], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8717], 00:33:07.799 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:33:07.799 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12649], 95.00th=[13173], 00:33:07.799 | 99.00th=[14615], 99.50th=[15401], 99.90th=[16712], 99.95th=[17171], 00:33:07.799 | 99.99th=[17433] 00:33:07.799 bw ( KiB/s): min=71968, max=89760, per=87.78%, avg=77960.00, stdev=8306.26, samples=4 00:33:07.799 iops : min= 4498, max= 5610, avg=4872.50, stdev=519.14, samples=4 00:33:07.799 lat (msec) : 4=0.42%, 
10=76.37%, 20=23.22% 00:33:07.799 cpu : usr=84.44%, sys=14.36%, ctx=83, majf=0, minf=2386 00:33:07.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:07.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:07.799 issued rwts: total=18920,9775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:07.799 00:33:07.799 Run status group 0 (all jobs): 00:33:07.799 READ: bw=147MiB/s (155MB/s), 147MiB/s-147MiB/s (155MB/s-155MB/s), io=296MiB (310MB), run=2005-2005msec 00:33:07.799 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=153MiB (160MB), run=1761-1761msec 00:33:08.058 ----------------------------------------------------- 00:33:08.058 Suppressions used: 00:33:08.058 count bytes template 00:33:08.058 1 57 /usr/src/fio/parse.c 00:33:08.058 215 20640 /usr/src/fio/iolog.c 00:33:08.058 1 8 libtcmalloc_minimal.so 00:33:08.058 ----------------------------------------------------- 00:33:08.058 00:33:08.058 12:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:08.317 12:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:33:11.603 Nvme0n1 00:33:11.603 12:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b6b760a7-2264-4ca8-abe0-7945508aa7f2 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b6b760a7-2264-4ca8-abe0-7945508aa7f2 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=b6b760a7-2264-4ca8-abe0-7945508aa7f2 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 
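get_lvs_free_mb, entered above, turns the store's free_clusters and cluster_size into MiB. For lvs_0 as dumped just below (930 free clusters of 1 GiB each), the arithmetic works out as:

    # free_mb = free_clusters * cluster_size / 1 MiB
    echo $(( 930 * 1073741824 / 1048576 ))   # -> 952320

which matches the 952320 subsequently passed to bdev_lvol_create for lbd_0.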
00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:14.891 { 00:33:14.891 "uuid": "b6b760a7-2264-4ca8-abe0-7945508aa7f2", 00:33:14.891 "name": "lvs_0", 00:33:14.891 "base_bdev": "Nvme0n1", 00:33:14.891 "total_data_clusters": 930, 00:33:14.891 "free_clusters": 930, 00:33:14.891 "block_size": 512, 00:33:14.891 "cluster_size": 1073741824 00:33:14.891 } 00:33:14.891 ]' 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b6b760a7-2264-4ca8-abe0-7945508aa7f2") .free_clusters' 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b6b760a7-2264-4ca8-abe0-7945508aa7f2") .cluster_size' 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:33:14.891 952320 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:33:14.891 9d558586-7631-4a83-baab-5797516bfd87 00:33:14.891 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:15.151 12:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:15.409 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:15.667 12:36:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:15.667 12:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.925 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:15.925 fio-3.35 00:33:15.925 Starting 1 thread 00:33:18.455 00:33:18.455 test: (groupid=0, jobs=1): err= 0: pid=3829752: Tue Dec 10 12:36:25 2024 00:33:18.455 read: IOPS=6937, BW=27.1MiB/s (28.4MB/s)(54.4MiB/2007msec) 00:33:18.455 slat (nsec): min=1659, max=110102, avg=1881.66, stdev=1360.69 00:33:18.455 clat (usec): min=691, max=170612, avg=10102.01, stdev=10968.77 00:33:18.455 lat (usec): min=693, max=170642, avg=10103.89, stdev=10968.97 00:33:18.455 clat percentiles (msec): 00:33:18.455 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:33:18.455 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:33:18.455 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:33:18.455 | 99.00th=[ 12], 99.50th=[ 16], 99.90th=[ 171], 99.95th=[ 171], 00:33:18.455 | 99.99th=[ 171] 00:33:18.455 bw ( KiB/s): min=19768, max=30552, per=99.86%, avg=27710.00, stdev=5298.32, samples=4 00:33:18.455 iops : min= 4942, max= 7638, avg=6927.50, stdev=1324.58, samples=4 00:33:18.455 write: IOPS=6943, BW=27.1MiB/s (28.4MB/s)(54.4MiB/2007msec); 0 zone resets 00:33:18.455 slat (nsec): min=1733, max=85134, avg=1966.82, stdev=877.91 00:33:18.455 clat (usec): min=228, max=169023, avg=8242.82, stdev=10251.77 00:33:18.455 lat (usec): min=230, max=169028, avg=8244.78, stdev=10251.97 00:33:18.455 clat percentiles (msec): 00:33:18.455 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:33:18.455 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:33:18.455 | 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:33:18.455 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:33:18.455 | 99.99th=[ 169] 00:33:18.455 bw ( KiB/s): min=20648, max=30208, per=99.93%, avg=27754.00, stdev=4737.72, samples=4 00:33:18.455 iops : min= 5162, max= 7552, avg=6938.50, stdev=1184.43, samples=4 00:33:18.455 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 
1000=0.01% 00:33:18.455 lat (msec) : 2=0.03%, 4=0.19%, 10=89.22%, 20=10.07%, 250=0.46% 00:33:18.455 cpu : usr=76.07%, sys=22.93%, ctx=121, majf=0, minf=1502 00:33:18.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:18.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:18.455 issued rwts: total=13923,13936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:18.455 00:33:18.455 Run status group 0 (all jobs): 00:33:18.455 READ: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=54.4MiB (57.0MB), run=2007-2007msec 00:33:18.455 WRITE: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=54.4MiB (57.1MB), run=2007-2007msec 00:33:18.713 ----------------------------------------------------- 00:33:18.713 Suppressions used: 00:33:18.713 count bytes template 00:33:18.713 1 58 /usr/src/fio/parse.c 00:33:18.713 1 8 libtcmalloc_minimal.so 00:33:18.713 ----------------------------------------------------- 00:33:18.713 00:33:18.713 12:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:18.970 12:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d9629782-792b-47b5-8875-29d24cfd8abc 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d9629782-792b-47b5-8875-29d24cfd8abc 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d9629782-792b-47b5-8875-29d24cfd8abc 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:20.341 { 00:33:20.341 "uuid": "b6b760a7-2264-4ca8-abe0-7945508aa7f2", 00:33:20.341 "name": "lvs_0", 00:33:20.341 "base_bdev": "Nvme0n1", 00:33:20.341 "total_data_clusters": 930, 00:33:20.341 "free_clusters": 0, 00:33:20.341 "block_size": 512, 00:33:20.341 "cluster_size": 1073741824 00:33:20.341 }, 00:33:20.341 { 00:33:20.341 "uuid": "d9629782-792b-47b5-8875-29d24cfd8abc", 00:33:20.341 "name": "lvs_n_0", 00:33:20.341 "base_bdev": "9d558586-7631-4a83-baab-5797516bfd87", 00:33:20.341 "total_data_clusters": 237847, 00:33:20.341 "free_clusters": 237847, 00:33:20.341 "block_size": 512, 00:33:20.341 "cluster_size": 4194304 00:33:20.341 } 00:33:20.341 ]' 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d9629782-792b-47b5-8875-29d24cfd8abc") .free_clusters' 00:33:20.341 12:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:33:20.341 12:36:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d9629782-792b-47b5-8875-29d24cfd8abc") .cluster_size' 00:33:20.341 12:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:20.341 12:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:33:20.341 12:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:33:20.341 951388 00:33:20.341 12:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:21.275 5749acaa-227e-4c77-b43b-1f42138ddae2 00:33:21.275 12:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:21.532 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:21.533 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1351 -- # break 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:21.791 12:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:22.048 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:22.048 fio-3.35 00:33:22.048 Starting 1 thread 00:33:24.572 00:33:24.572 test: (groupid=0, jobs=1): err= 0: pid=3830880: Tue Dec 10 12:36:31 2024 00:33:24.572 read: IOPS=6711, BW=26.2MiB/s (27.5MB/s)(52.6MiB/2008msec) 00:33:24.572 slat (nsec): min=1706, max=100204, avg=1999.21, stdev=1373.23 00:33:24.572 clat (usec): min=3743, max=17224, avg=10471.73, stdev=1001.99 00:33:24.572 lat (usec): min=3748, max=17227, avg=10473.73, stdev=1001.90 00:33:24.572 clat percentiles (usec): 00:33:24.572 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:33:24.572 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:33:24.572 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:33:24.572 | 99.00th=[13435], 99.50th=[13960], 99.90th=[15270], 99.95th=[16909], 00:33:24.572 | 99.99th=[17171] 00:33:24.572 bw ( KiB/s): min=25944, max=27168, per=99.85%, avg=26806.00, stdev=581.54, samples=4 00:33:24.572 iops : min= 6486, max= 6792, avg=6701.50, stdev=145.39, samples=4 00:33:24.572 write: IOPS=6712, BW=26.2MiB/s (27.5MB/s)(52.7MiB/2008msec); 0 zone resets 00:33:24.572 slat (nsec): min=1742, max=98825, avg=2045.83, stdev=1096.08 00:33:24.572 clat (usec): min=1719, max=15254, avg=8491.66, stdev=798.16 00:33:24.572 lat (usec): min=1725, max=15256, avg=8493.70, stdev=798.09 00:33:24.572 clat percentiles (usec): 00:33:24.572 | 1.00th=[ 6718], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7898], 00:33:24.572 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:24.572 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:33:24.572 | 99.00th=[10421], 99.50th=[11076], 99.90th=[14091], 99.95th=[15008], 00:33:24.572 | 99.99th=[15270] 00:33:24.572 bw ( KiB/s): min=26304, max=27328, per=100.00%, avg=26852.00, stdev=424.20, samples=4 00:33:24.572 iops : min= 6576, max= 6832, avg=6713.00, stdev=106.05, samples=4 00:33:24.572 lat (msec) : 2=0.01%, 4=0.10%, 10=63.97%, 20=35.92% 00:33:24.572 cpu : usr=74.29%, sys=24.71%, ctx=111, majf=0, minf=1502 00:33:24.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:24.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:24.572 issued rwts: total=13477,13479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:24.572 00:33:24.572 Run status group 0 (all jobs): 00:33:24.572 READ: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=52.6MiB (55.2MB), run=2008-2008msec 00:33:24.572 WRITE: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=52.7MiB (55.2MB), run=2008-2008msec 00:33:24.830 ----------------------------------------------------- 00:33:24.830 Suppressions used: 00:33:24.830 count bytes template 00:33:24.830 1 58 
/usr/src/fio/parse.c 00:33:24.830 1 8 libtcmalloc_minimal.so 00:33:24.830 ----------------------------------------------------- 00:33:24.830 00:33:24.830 12:36:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:25.087 12:36:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:25.087 12:36:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:29.270 12:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:29.527 12:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:32.807 12:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:32.807 12:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.706 rmmod nvme_tcp 00:33:34.706 rmmod nvme_fabrics 00:33:34.706 rmmod nvme_keyring 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3826331 ']' 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3826331 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3826331 ']' 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3826331 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3826331 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3826331' 00:33:34.706 killing process with pid 3826331 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3826331 00:33:34.706 12:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3826331 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.081 12:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.983 00:33:37.983 real 0m43.916s 00:33:37.983 user 2m56.223s 00:33:37.983 sys 0m10.136s 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.983 ************************************ 00:33:37.983 END TEST nvmf_fio_host 00:33:37.983 ************************************ 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.983 ************************************ 00:33:37.983 START TEST nvmf_failover 00:33:37.983 ************************************ 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:37.983 * Looking for test storage... 
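[editor's note] The teardown above follows the harness's killprocess pattern: read the process name to distinguish a sudo wrapper from a directly-run reactor (the sudo branch takes a different path upstream), signal the target with the default SIGTERM, then wait on the PID so shared memory and hugepages are reaped before the next test starts. A condensed sketch of the direct (non-sudo) case, with the pid value taken from the log:

    # sketch: the shutdown sequence logged above (direct-run reactor case)
    pid=3826331
    name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
    if [ "$name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"                           # default SIGTERM
    fi
    wait "$pid"                               # reap (pid is a child of the harness shell)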
00:33:37.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:37.983 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:38.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.241 --rc genhtml_branch_coverage=1 00:33:38.241 --rc genhtml_function_coverage=1 00:33:38.241 --rc genhtml_legend=1 00:33:38.241 --rc geninfo_all_blocks=1 00:33:38.241 --rc geninfo_unexecuted_blocks=1 00:33:38.241 00:33:38.241 ' 00:33:38.241 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:38.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.241 --rc genhtml_branch_coverage=1 00:33:38.241 --rc genhtml_function_coverage=1 00:33:38.241 --rc genhtml_legend=1 00:33:38.241 --rc geninfo_all_blocks=1 00:33:38.241 --rc geninfo_unexecuted_blocks=1 00:33:38.241 00:33:38.242 ' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:38.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.242 --rc genhtml_branch_coverage=1 00:33:38.242 --rc genhtml_function_coverage=1 00:33:38.242 --rc genhtml_legend=1 00:33:38.242 --rc geninfo_all_blocks=1 00:33:38.242 --rc geninfo_unexecuted_blocks=1 00:33:38.242 00:33:38.242 ' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:38.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.242 --rc genhtml_branch_coverage=1 00:33:38.242 --rc genhtml_function_coverage=1 00:33:38.242 --rc genhtml_legend=1 00:33:38.242 --rc geninfo_all_blocks=1 00:33:38.242 --rc geninfo_unexecuted_blocks=1 00:33:38.242 00:33:38.242 ' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
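[editor's note] The three port constants sourced above (NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422) are the alternate paths the failover test exercises later: a single subsystem gets a TCP listener on each port, so bdevperf can reconnect through a surviving listener when one is removed. In outline, using the subsystem name and serial that appear later in this log:

    # sketch: one subsystem exposed on all three failover ports
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.2 -s "$port"
    done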
00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.242 12:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.579 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:43.580 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:43.580 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:43.580 Found net devices under 0000:af:00.0: cvl_0_0 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:43.580 Found net devices under 0000:af:00.1: cvl_0_1 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.580 12:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:33:43.580 00:33:43.580 --- 10.0.0.2 ping statistics --- 00:33:43.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.580 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:43.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:33:43.580 00:33:43.580 --- 10.0.0.1 ping statistics --- 00:33:43.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.580 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3836236 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3836236 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3836236 ']' 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.580 12:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:43.580 [2024-12-10 12:36:50.334790] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:33:43.581 [2024-12-10 12:36:50.334879] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.839 [2024-12-10 12:36:50.454039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:43.839 [2024-12-10 12:36:50.569919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:43.839 [2024-12-10 12:36:50.569963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.839 [2024-12-10 12:36:50.569974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.839 [2024-12-10 12:36:50.569985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.839 [2024-12-10 12:36:50.569993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.839 [2024-12-10 12:36:50.572326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.839 [2024-12-10 12:36:50.572388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.839 [2024-12-10 12:36:50.572395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:44.403 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:44.403 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:44.403 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:44.403 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:44.403 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:44.403 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.403 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:44.660 [2024-12-10 12:36:51.363576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.660 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:44.918 Malloc0 00:33:44.918 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:45.175 12:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:45.433 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.433 [2024-12-10 12:36:52.254156] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.690 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:45.690 [2024-12-10 12:36:52.446772] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:45.690 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:45.948 [2024-12-10 12:36:52.639430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3836701 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3836701 /var/tmp/bdevperf.sock 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3836701 ']' 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:45.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.948 12:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:46.882 12:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.882 12:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:46.882 12:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:47.139 NVMe0n1 00:33:47.397 12:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:47.655 00:33:47.655 12:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3836937 00:33:47.655 12:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:47.655 12:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:48.598 12:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.857 [2024-12-10 12:36:55.557085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:33:48.857 [2024-12-10 12:36:55.557163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.857 [2024-12-10 12:36:55.557266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.858 [2024-12-10 12:36:55.557275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.858 [2024-12-10 12:36:55.557283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.858 [2024-12-10 12:36:55.557291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.858 [2024-12-10 12:36:55.557299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.858 [2024-12-10 12:36:55.557307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:48.858 12:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:52.140 12:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:52.398 00:33:52.398 12:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:52.398 [2024-12-10 12:36:59.202119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set (message repeated through 12:36:59.202435)
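For anyone replaying this failover sequence by hand, the steps above reduce to attaching one controller over two portals and then dropping the listener the initiator is currently using. The addresses, NQN, and flags below are taken from this log; the sketch is illustrative, not the failover.sh script itself:

    # Attach both portals under one bdev name; -x failover keeps the second
    # path as a passive standby rather than an active multipath leg.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

    # Remove the active listener on the target; in-flight I/O on that path is
    # aborted (SQ DELETION) and NVMe0 fails over to the next registered portal.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # settle time the test also allows after each removal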
00:33:52.657 12:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:55.938 12:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.938 [2024-12-10 12:37:02.416385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.938 12:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:56.872 12:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:56.872 [2024-12-10 12:37:03.635159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set (message repeated through 12:37:03.635527)
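The result block that follows is printed when the perform_tests helper exits. In the script this is just a background helper plus a shell wait wrapped around the listener churn, as seen at host/failover.sh@38-39 and @59 in this log (a minimal sketch of that pattern, paths shortened):

    # bdevperf was started with -z, so it idles until perform_tests kicks off
    # the configured workload; the helper prints the summary JSON when done.
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    # ... listeners are removed/re-added here while the verify workload runs ...
    wait $run_test_pid    # returns once the 15 s verify run completes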
00:33:56.872 12:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3836937
00:34:03.427 { 00:34:03.427 "results": [ 00:34:03.427 { 00:34:03.427 "job": "NVMe0n1", 00:34:03.427 "core_mask": "0x1", 00:34:03.427 "workload": "verify", 00:34:03.427 "status": "finished", 00:34:03.427 "verify_range": { 00:34:03.427 "start": 0, 00:34:03.427 "length": 16384 00:34:03.427 }, 00:34:03.427 "queue_depth": 128, 00:34:03.427 "io_size": 4096, 00:34:03.427 "runtime": 15.007091, 00:34:03.427 "iops": 9574.87363806883, 00:34:03.427 "mibps": 37.401850148706366, 00:34:03.427 "io_failed": 11557, 00:34:03.427 "io_timeout": 0, 00:34:03.427 "avg_latency_us": 12348.722397687508, 00:34:03.427 "min_latency_us": 479.81714285714287, 00:34:03.427 "max_latency_us": 24341.942857142858 00:34:03.427 } 00:34:03.427 ], 00:34:03.427 "core_count": 1 00:34:03.427 }
00:34:03.427 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3836701 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3836701 ']' 00:34:03.428 12:37:09
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3836701 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3836701 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3836701' 00:34:03.428 killing process with pid 3836701 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3836701 00:34:03.428 12:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3836701 00:34:03.693 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:03.693 [2024-12-10 12:36:52.739471] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:34:03.693 [2024-12-10 12:36:52.739567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836701 ] 00:34:03.693 [2024-12-10 12:36:52.853922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.693 [2024-12-10 12:36:52.966098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.693 Running I/O for 15 seconds... 
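Before the per-command abort dump below, a quick cross-check of the results JSON shown earlier: throughput in MiB/s is iops * io_size / 2^20, i.e. 9574.87 * 4096 / 1048576 ~ 37.40, which matches the reported mibps field. With that JSON captured to a file (results.json is a hypothetical capture; jq is assumed to be available on the host), the same check is one line:

    jq -r '.results[0] | "\(.iops * .io_size / 1048576) MiB/s over \(.runtime) s, \(.io_failed) failed I/Os"' results.json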
00:34:03.693 9521.00 IOPS, 37.19 MiB/s [2024-12-10T11:37:10.519Z] [2024-12-10 12:36:55.559386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.693 [2024-12-10 12:36:55.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same print_command/print_completion pair repeats with varying cid for READ lba 83784 through 83952 and then for WRITE lba 83968 through 84480, every one completing ABORTED - SQ DELETION (00/08))
00:34:03.695 [2024-12-10 12:36:55.561361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.695 [2024-12-10 12:36:55.561373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84488 len:8 PRP1 0x0 PRP2 0x0 00:34:03.695 [2024-12-10 12:36:55.561384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.695 [2024-12-10 12:36:55.561398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
(the aborting-queued-i/o / manual-completion triplet repeats for WRITE lba 84496 through at least 84736, all ABORTED - SQ DELETION (00/08); output truncated here)
Command completed manually: 00:34:03.696 [2024-12-10 12:36:55.562457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84744 len:8 PRP1 0x0 PRP2 0x0 00:34:03.696 [2024-12-10 12:36:55.562466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.696 [2024-12-10 12:36:55.562475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.697 [2024-12-10 12:36:55.562482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.697 [2024-12-10 12:36:55.562490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84752 len:8 PRP1 0x0 PRP2 0x0 00:34:03.697 [2024-12-10 12:36:55.562499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.562507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.697 [2024-12-10 12:36:55.562516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.697 [2024-12-10 12:36:55.562523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84760 len:8 PRP1 0x0 PRP2 0x0 00:34:03.697 [2024-12-10 12:36:55.562532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.562541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.697 [2024-12-10 12:36:55.562548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.697 [2024-12-10 12:36:55.562556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84768 len:8 PRP1 0x0 PRP2 0x0 00:34:03.697 [2024-12-10 12:36:55.562565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.562573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.697 [2024-12-10 12:36:55.562581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.697 [2024-12-10 12:36:55.562588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84776 len:8 PRP1 0x0 PRP2 0x0 00:34:03.697 [2024-12-10 12:36:55.562597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.562607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.697 [2024-12-10 12:36:55.562615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.697 [2024-12-10 12:36:55.562622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84784 len:8 PRP1 0x0 PRP2 0x0 00:34:03.697 [2024-12-10 12:36:55.562631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.562641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.697 [2024-12-10 12:36:55.562648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.697 [2024-12-10 
12:36:55.562656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84792 len:8 PRP1 0x0 PRP2 0x0 00:34:03.697 [2024-12-10 12:36:55.562665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.562673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.697 [2024-12-10 12:36:55.562680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.697 [2024-12-10 12:36:55.562688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83960 len:8 PRP1 0x0 PRP2 0x0 00:34:03.697 [2024-12-10 12:36:55.562697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.574107] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:03.697 [2024-12-10 12:36:55.574155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.697 [2024-12-10 12:36:55.574170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.574183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.697 [2024-12-10 12:36:55.574192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.574202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.697 [2024-12-10 12:36:55.574212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.574222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.697 [2024-12-10 12:36:55.574231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.697 [2024-12-10 12:36:55.574241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:34:03.697 [2024-12-10 12:36:55.574284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:03.697 [2024-12-10 12:36:55.578227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:03.697 [2024-12-10 12:36:55.612874] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
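The status pair printed as (00/08) above is the NVMe completion Status Code Type / Status Code: SCT 0h (generic command status) with SC 08h, "Command Aborted due to SQ Deletion", which is what the host reports when a queue pair is torn down mid-failover and its inflight commands are completed manually. A minimal standalone C sketch of the decode (an illustration following the NVMe status-field bit layout, not SPDK's own code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Build the 16-bit status field the log prints as "(00/08)":
         * bit 0 = phase tag (P), bits 8:1 = status code (SC),
         * bits 11:9 = status code type (SCT). */
        uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));
        uint8_t sct = (status >> 9) & 0x7;
        uint8_t sc  = (status >> 1) & 0xff;

        /* SCT 0h / SC 08h = "Command Aborted due to SQ Deletion":
         * expected when a qpair is deleted during failover and the
         * host completes its queued commands manually. */
        printf("SCT=%02x SC=%02x -> %s\n", sct, sc,
               (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
        return 0;
    }

Compiled and run, this prints SCT=00 SC=08 -> ABORTED - SQ DELETION, matching every completion in the storm above.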
00:34:03.697 9336.00 IOPS, 36.47 MiB/s [2024-12-10T11:37:10.523Z]
00:34:03.697 9502.00 IOPS, 37.12 MiB/s [2024-12-10T11:37:10.523Z]
00:34:03.697 9579.25 IOPS, 37.42 MiB/s [2024-12-10T11:37:10.523Z]
00:34:03.697 [2024-12-10 12:36:59.202616 .. .203778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 (cids 94, 46, 35, 95, 110, ...) nsid:1 lba:113280..113704 (step 8) len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each via nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:03.699 [2024-12-10 12:36:59.203790 .. .205118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 (cids 32, 31, 82, 123, 50, ...) nsid:1 lba:113712..114224 (step 8) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; each likewise ABORTED - SQ DELETION (00/08)
00:34:03.700 [2024-12-10 12:36:59.205161 .. .205462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:114232..114296 (step 8) len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08)
00:34:03.700 [2024-12-10 12:36:59.205762] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:34:03.700 [2024-12-10 12:36:59.205793 .. .205864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3..0 nsid:0 cdw10:00000000 cdw11:00000000; all four ABORTED - SQ DELETION (00/08)
00:34:03.701 [2024-12-10 12:36:59.205873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:34:03.701 [2024-12-10 12:36:59.205909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor
00:34:03.701 [2024-12-10 12:36:59.208968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:34:03.701 [2024-12-10 12:36:59.393468] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:34:03.701 9232.00 IOPS, 36.06 MiB/s [2024-12-10T11:37:10.527Z] 9305.17 IOPS, 36.35 MiB/s [2024-12-10T11:37:10.527Z] 9379.14 IOPS, 36.64 MiB/s [2024-12-10T11:37:10.527Z] 9423.75 IOPS, 36.81 MiB/s [2024-12-10T11:37:10.527Z] 9467.22 IOPS, 36.98 MiB/s [2024-12-10T11:37:10.527Z] [2024-12-10 12:37:03.635669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.635964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.635975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115264 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.701 [2024-12-10 12:37:03.636354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.701 [2024-12-10 12:37:03.636517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.701 [2024-12-10 12:37:03.636528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 
12:37:03.636580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.636925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.702 [2024-12-10 12:37:03.636946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.702 [2024-12-10 12:37:03.636966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.702 [2024-12-10 12:37:03.636987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.636997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.702 [2024-12-10 12:37:03.637006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.702 [2024-12-10 12:37:03.637027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.702 [2024-12-10 12:37:03.637047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.702 [2024-12-10 12:37:03.637067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.702 [2024-12-10 12:37:03.637268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.702 [2024-12-10 12:37:03.637277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:03.703 [2024-12-10 12:37:03.637416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.637988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.637997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.638008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.638017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.638028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.638037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.638048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.638057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.638070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.638079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.638090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.703 [2024-12-10 12:37:03.638099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.703 [2024-12-10 12:37:03.638110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.704 [2024-12-10 12:37:03.638371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:03.704 [2024-12-10 12:37:03.638418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:03.704 [2024-12-10 12:37:03.638428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116128 len:8 PRP1 0x0 PRP2 0x0 00:34:03.704 [2024-12-10 12:37:03.638438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638742] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:03.704 [2024-12-10 12:37:03.638773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.704 [2024-12-10 12:37:03.638784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.704 [2024-12-10 12:37:03.638804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.704 [2024-12-10 12:37:03.638824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.704 [2024-12-10 12:37:03.638844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.704 [2024-12-10 12:37:03.638853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:34:03.704 [2024-12-10 12:37:03.641904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:03.704 [2024-12-10 12:37:03.641946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:03.704 [2024-12-10 12:37:03.680609] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:34:03.704 9463.90 IOPS, 36.97 MiB/s [2024-12-10T11:37:10.530Z] 9481.27 IOPS, 37.04 MiB/s [2024-12-10T11:37:10.530Z] 9510.17 IOPS, 37.15 MiB/s [2024-12-10T11:37:10.530Z] 9538.54 IOPS, 37.26 MiB/s [2024-12-10T11:37:10.530Z] 9551.50 IOPS, 37.31 MiB/s 00:34:03.704 Latency(us) 00:34:03.704 [2024-12-10T11:37:10.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.704 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:03.704 Verification LBA range: start 0x0 length 0x4000 00:34:03.704 NVMe0n1 : 15.01 9574.87 37.40 770.10 0.00 12348.72 479.82 24341.94 00:34:03.704 [2024-12-10T11:37:10.530Z] =================================================================================================================== 00:34:03.704 [2024-12-10T11:37:10.530Z] Total : 9574.87 37.40 770.10 0.00 12348.72 479.82 24341.94 00:34:03.704 Received shutdown signal, test time was about 15.000000 seconds 00:34:03.704 00:34:03.704 Latency(us) 00:34:03.704 [2024-12-10T11:37:10.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.704 [2024-12-10T11:37:10.530Z] =================================================================================================================== 00:34:03.704 [2024-12-10T11:37:10.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3839495 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:03.704 12:37:10 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3839495 /var/tmp/bdevperf.sock 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3839495 ']' 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:03.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.704 12:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:04.637 12:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.637 12:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:04.637 12:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:04.895 [2024-12-10 12:37:11.553640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:04.895 12:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:05.153 [2024-12-10 12:37:11.750275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:05.153 12:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:05.410 NVMe0n1 00:34:05.410 12:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:05.668 00:34:05.925 12:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:06.183 00:34:06.183 12:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:06.183 12:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:06.183 12:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:06.441 12:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:09.721 12:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:09.721 12:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:09.721 12:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3840500 00:34:09.721 12:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:09.721 12:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3840500 00:34:11.094 { 00:34:11.094 "results": [ 00:34:11.094 { 00:34:11.094 "job": "NVMe0n1", 00:34:11.094 "core_mask": "0x1", 00:34:11.094 "workload": "verify", 00:34:11.094 "status": "finished", 00:34:11.094 "verify_range": { 00:34:11.094 "start": 0, 00:34:11.094 "length": 16384 00:34:11.094 }, 00:34:11.094 "queue_depth": 128, 00:34:11.094 "io_size": 4096, 00:34:11.094 "runtime": 1.00698, 00:34:11.094 "iops": 9742.000834177441, 00:34:11.095 "mibps": 38.05469075850563, 00:34:11.095 "io_failed": 0, 00:34:11.095 "io_timeout": 0, 00:34:11.095 "avg_latency_us": 13086.074811902334, 00:34:11.095 "min_latency_us": 1497.9657142857143, 00:34:11.095 "max_latency_us": 11796.48 00:34:11.095 } 00:34:11.095 ], 00:34:11.095 "core_count": 1 00:34:11.095 } 00:34:11.095 12:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:11.095 [2024-12-10 12:37:10.577924] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:34:11.095 [2024-12-10 12:37:10.578022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839495 ] 00:34:11.095 [2024-12-10 12:37:10.693580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.095 [2024-12-10 12:37:10.806137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.095 [2024-12-10 12:37:13.147478] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:11.095 [2024-12-10 12:37:13.147552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.095 [2024-12-10 12:37:13.147569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.095 [2024-12-10 12:37:13.147583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.095 [2024-12-10 12:37:13.147595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.095 [2024-12-10 12:37:13.147606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.095 [2024-12-10 12:37:13.147616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.095 [2024-12-10 12:37:13.147627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:34:11.095 [2024-12-10 12:37:13.147636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.095 [2024-12-10 12:37:13.147646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:34:11.095 [2024-12-10 12:37:13.147694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:11.095 [2024-12-10 12:37:13.147727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:11.095 [2024-12-10 12:37:13.199508] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:11.095 Running I/O for 1 seconds... 00:34:11.095 9682.00 IOPS, 37.82 MiB/s 00:34:11.095 Latency(us) 00:34:11.095 [2024-12-10T11:37:17.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.095 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:11.095 Verification LBA range: start 0x0 length 0x4000 00:34:11.095 NVMe0n1 : 1.01 9742.00 38.05 0.00 0.00 13086.07 1497.97 11796.48 00:34:11.095 [2024-12-10T11:37:17.921Z] =================================================================================================================== 00:34:11.095 [2024-12-10T11:37:17.921Z] Total : 9742.00 38.05 0.00 0.00 13086.07 1497.97 11796.48 00:34:11.095 12:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:11.095 12:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:11.095 12:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:11.353 12:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:11.353 12:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:11.353 12:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:11.611 12:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3839495 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3839495 ']' 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3839495 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3839495 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3839495' 00:34:14.888 killing process with pid 3839495 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3839495 00:34:14.888 12:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3839495 00:34:15.822 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:15.822 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:16.081 rmmod nvme_tcp 00:34:16.081 rmmod nvme_fabrics 00:34:16.081 rmmod nvme_keyring 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3836236 ']' 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3836236 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3836236 ']' 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3836236 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3836236 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3836236' 
00:34:16.081 killing process with pid 3836236 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3836236 00:34:16.081 12:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3836236 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.455 12:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.987 00:34:19.987 real 0m41.527s 00:34:19.987 user 2m14.436s 00:34:19.987 sys 0m7.797s 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:19.987 ************************************ 00:34:19.987 END TEST nvmf_failover 00:34:19.987 ************************************ 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.987 ************************************ 00:34:19.987 START TEST nvmf_host_discovery 00:34:19.987 ************************************ 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:19.987 * Looking for test storage... 
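For reference, the failover path exercised in the test that just finished (host/failover.sh@75-103 above) reduces to the RPC sequence below; this is a minimal sketch, assuming an nvmf target already listening on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1 and a bdevperf instance serving /var/tmp/bdevperf.sock, with rpc.py/bdevperf.py paths abbreviated relative to the spdk checkout:

# publish two extra portals for the same subsystem
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# attach the same controller name at all three portals with -x failover, so the
# extra trids register as failover candidates instead of new controllers
for port in 4420 4421 4422; do
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done
# drop the active path; I/O should fail over to 4421/4422
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# run traffic and confirm the controller survived the path loss
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0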
00:34:19.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.987 --rc genhtml_branch_coverage=1 00:34:19.987 --rc genhtml_function_coverage=1 00:34:19.987 --rc genhtml_legend=1 00:34:19.987 --rc geninfo_all_blocks=1 00:34:19.987 --rc geninfo_unexecuted_blocks=1 00:34:19.987 00:34:19.987 ' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.987 --rc genhtml_branch_coverage=1 00:34:19.987 --rc genhtml_function_coverage=1 00:34:19.987 --rc genhtml_legend=1 00:34:19.987 --rc geninfo_all_blocks=1 00:34:19.987 --rc geninfo_unexecuted_blocks=1 00:34:19.987 00:34:19.987 ' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.987 --rc genhtml_branch_coverage=1 00:34:19.987 --rc genhtml_function_coverage=1 00:34:19.987 --rc genhtml_legend=1 00:34:19.987 --rc geninfo_all_blocks=1 00:34:19.987 --rc geninfo_unexecuted_blocks=1 00:34:19.987 00:34:19.987 ' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.987 --rc genhtml_branch_coverage=1 00:34:19.987 --rc genhtml_function_coverage=1 00:34:19.987 --rc genhtml_legend=1 00:34:19.987 --rc geninfo_all_blocks=1 00:34:19.987 --rc geninfo_unexecuted_blocks=1 00:34:19.987 00:34:19.987 ' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:19.987 12:37:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.987 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:19.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.988 12:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.254 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:25.255 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:25.255 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.255 12:37:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:25.255 Found net devices under 0000:af:00.0: cvl_0_0 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:25.255 Found net devices under 0000:af:00.1: cvl_0_1 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.255 
12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:25.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:34:25.255 00:34:25.255 --- 10.0.0.2 ping statistics --- 00:34:25.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.255 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:25.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:34:25.255 00:34:25.255 --- 10.0.0.1 ping statistics --- 00:34:25.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.255 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3844893 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3844893 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3844893 ']' 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.255 12:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.255 [2024-12-10 12:37:31.615038] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
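The target/initiator split used by these tests was wired up by nvmf_tcp_init just above; condensed, and specific to this rig's interface names cvl_0_0/cvl_0_1, it amounts to the following:

# move one port of the NIC into a private namespace to act as the target side
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps cvl_0_1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity check: each side can reach the other
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1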
00:34:25.256 [2024-12-10 12:37:31.615128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.256 [2024-12-10 12:37:31.730505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.256 [2024-12-10 12:37:31.833301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.256 [2024-12-10 12:37:31.833349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.256 [2024-12-10 12:37:31.833361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.256 [2024-12-10 12:37:31.833372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.256 [2024-12-10 12:37:31.833380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.256 [2024-12-10 12:37:31.834724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.823 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.823 [2024-12-10 12:37:32.449010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.824 [2024-12-10 12:37:32.461230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.824 null0 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.824 null1 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3845129 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3845129 /tmp/host.sock 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3845129 ']' 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:25.824 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.824 12:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.824 [2024-12-10 12:37:32.567797] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
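Two SPDK applications are in play from here on: the target (nvmfpid above, answering RPCs on the default /var/tmp/spdk.sock) and a host-side nvmf_tgt on /tmp/host.sock acting as the discovery client. A rough sketch of the sequence discovery.sh drives next, using rpc.py directly in place of the rpc_cmd wrapper seen in the trace:

# target side: transport, discovery listener on 8009, two null backing bdevs
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512
scripts/rpc.py bdev_null_create null1 1000 512
# host side: start the discovery service against the target's discovery portal
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
  -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# as subsystems, namespaces, listeners and hosts are added on the target
# (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener,
# nvmf_subsystem_add_host), matching controllers and bdevs should appear on
# the host, observable via:
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs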
00:34:25.824 [2024-12-10 12:37:32.567888] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845129 ] 00:34:26.083 [2024-12-10 12:37:32.680758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.083 [2024-12-10 12:37:32.787543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:26.650 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 [2024-12-10 12:37:33.704594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:26.909 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:27.167 12:37:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:27.167 12:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:27.733 [2024-12-10 12:37:34.450755] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:27.733 [2024-12-10 12:37:34.450787] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:27.733 [2024-12-10 12:37:34.450811] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:27.992 
[2024-12-10 12:37:34.577209] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:27.992 [2024-12-10 12:37:34.719174] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:27.992 [2024-12-10 12:37:34.720373] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000325f80:1 started. 00:34:27.992 [2024-12-10 12:37:34.722074] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:27.992 [2024-12-10 12:37:34.722097] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:27.992 [2024-12-10 12:37:34.729590] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000325f80 was disconnected and freed. delete nvme_qpair. 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.251 
12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.251 12:37:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.251 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.510 [2024-12-10 12:37:35.112617] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 00:34:28.510 [2024-12-10 12:37:35.120368] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.510 [2024-12-10 12:37:35.209657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:28.510 [2024-12-10 12:37:35.210343] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:28.510 [2024-12-10 12:37:35.210373] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.510 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:28.511 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.768 [2024-12-10 12:37:35.337776] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:28.768 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:28.768 12:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:28.768 [2024-12-10 12:37:35.396546] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:28.768 [2024-12-10 12:37:35.396598] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:28.768 [2024-12-10 12:37:35.396612] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:28.768 [2024-12-10 12:37:35.396620] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:29.710 12:37:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.710 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.711 [2024-12-10 12:37:36.465552] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:29.711 [2024-12-10 12:37:36.465583] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:29.711 [2024-12-10 12:37:36.474415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:29.711 [2024-12-10 12:37:36.474444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.711 [2024-12-10 12:37:36.474458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:29.711 [2024-12-10 12:37:36.474468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.711 [2024-12-10 12:37:36.474478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:29.711 [2024-12-10 12:37:36.474488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.711 [2024-12-10 12:37:36.474497] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:29.711 [2024-12-10 12:37:36.474507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.711 [2024-12-10 12:37:36.474516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.711 [2024-12-10 12:37:36.484423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.711 [2024-12-10 12:37:36.494462] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:29.711 [2024-12-10 12:37:36.494486] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:29.711 [2024-12-10 12:37:36.494495] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:29.711 [2024-12-10 12:37:36.494505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:29.711 [2024-12-10 12:37:36.494536] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:29.711 [2024-12-10 12:37:36.494811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.711 [2024-12-10 12:37:36.494833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:29.711 [2024-12-10 12:37:36.494846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:29.711 [2024-12-10 12:37:36.494863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:29.711 [2024-12-10 12:37:36.494887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:29.711 [2024-12-10 12:37:36.494901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:29.711 [2024-12-10 12:37:36.494912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:29.711 [2024-12-10 12:37:36.494922] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:29.711 [2024-12-10 12:37:36.494930] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:29.711 [2024-12-10 12:37:36.494937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:29.711 [2024-12-10 12:37:36.504572] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:29.711 [2024-12-10 12:37:36.504594] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:29.711 [2024-12-10 12:37:36.504601] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:29.711 [2024-12-10 12:37:36.504608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:29.711 [2024-12-10 12:37:36.504630] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:29.711 [2024-12-10 12:37:36.504847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.711 [2024-12-10 12:37:36.504867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:29.711 [2024-12-10 12:37:36.504878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:29.711 [2024-12-10 12:37:36.504894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:29.711 [2024-12-10 12:37:36.504907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:29.711 [2024-12-10 12:37:36.504916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:29.711 [2024-12-10 12:37:36.504925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:29.711 [2024-12-10 12:37:36.504933] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:29.711 [2024-12-10 12:37:36.504940] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:29.711 [2024-12-10 12:37:36.504952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:29.711 [2024-12-10 12:37:36.514667] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:29.711 [2024-12-10 12:37:36.514691] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:29.711 [2024-12-10 12:37:36.514698] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:29.711 [2024-12-10 12:37:36.514705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:29.711 [2024-12-10 12:37:36.514729] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:29.711 [2024-12-10 12:37:36.514891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.711 [2024-12-10 12:37:36.514910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:29.711 [2024-12-10 12:37:36.514921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:29.711 [2024-12-10 12:37:36.514937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:29.711 [2024-12-10 12:37:36.514952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:29.711 [2024-12-10 12:37:36.514961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:29.711 [2024-12-10 12:37:36.514970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:29.711 [2024-12-10 12:37:36.514978] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:29.711 [2024-12-10 12:37:36.514985] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:29.711 [2024-12-10 12:37:36.514992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.711 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.711 [2024-12-10 12:37:36.524764] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:29.711 [2024-12-10 12:37:36.524787] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:29.711 [2024-12-10 12:37:36.524798] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:29.711 [2024-12-10 12:37:36.524805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:29.711 [2024-12-10 12:37:36.524832] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:29.711 [2024-12-10 12:37:36.524998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.711 [2024-12-10 12:37:36.525019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:29.711 [2024-12-10 12:37:36.525030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:29.711 [2024-12-10 12:37:36.525045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:29.712 [2024-12-10 12:37:36.525068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:29.712 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.712 [2024-12-10 12:37:36.525078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:29.712 [2024-12-10 12:37:36.525089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:29.712 [2024-12-10 12:37:36.525098] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:29.712 [2024-12-10 12:37:36.525105] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:29.712 [2024-12-10 12:37:36.525111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:29.712 [2024-12-10 12:37:36.534868] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:29.712 [2024-12-10 12:37:36.534893] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:29.712 [2024-12-10 12:37:36.534900] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:29.712 [2024-12-10 12:37:36.534907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:29.712 [2024-12-10 12:37:36.534929] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:29.712 [2024-12-10 12:37:36.535025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.712 [2024-12-10 12:37:36.535042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:29.712 [2024-12-10 12:37:36.535053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:29.712 [2024-12-10 12:37:36.535068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:29.712 [2024-12-10 12:37:36.535081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:29.712 [2024-12-10 12:37:36.535090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:29.712 [2024-12-10 12:37:36.535100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:29.712 [2024-12-10 12:37:36.535115] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:29.712 [2024-12-10 12:37:36.535121] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:29.712 [2024-12-10 12:37:36.535127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:29.971 [2024-12-10 12:37:36.544964] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:29.971 [2024-12-10 12:37:36.544990] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:29.971 [2024-12-10 12:37:36.544997] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:29.971 [2024-12-10 12:37:36.545003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:29.971 [2024-12-10 12:37:36.545029] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:29.971 [2024-12-10 12:37:36.545287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.971 [2024-12-10 12:37:36.545307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:29.971 [2024-12-10 12:37:36.545318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:29.971 [2024-12-10 12:37:36.545333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:29.971 [2024-12-10 12:37:36.545357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:29.971 [2024-12-10 12:37:36.545367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:29.971 [2024-12-10 12:37:36.545377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:29.971 [2024-12-10 12:37:36.545385] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:29.971 [2024-12-10 12:37:36.545392] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:29.971 [2024-12-10 12:37:36.545398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:29.971 [2024-12-10 12:37:36.551387] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:29.971 [2024-12-10 12:37:36.551418] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:29.971 12:37:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.971 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.972 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.231 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:30.231 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:30.231 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:30.231 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.231 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:30.231 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.231 12:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.167 [2024-12-10 12:37:37.858743] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:31.167 [2024-12-10 12:37:37.858766] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:31.167 [2024-12-10 12:37:37.858794] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:31.167 [2024-12-10 12:37:37.987195] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:31.426 [2024-12-10 12:37:38.215490] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:31.426 [2024-12-10 12:37:38.216492] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x615000327380:1 started. 
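The @918-@922 lines above trace SPDK's generic polling helper from common/autotest_common.sh, driving the @74/@75 notification counter from host/discovery.sh. A minimal sketch of both, reconstructed from the traced line numbers; the pacing between retries is an assumption, since no delay is visible in this log:

    # Poll a bash condition until it holds or the retry budget runs out.
    waitforcondition() {
        local cond=$1   # @918: condition string, eval'd each iteration
        local max=10    # @919: retry budget
        while ((max--)); do           # @920
            if eval "$cond"; then     # @921
                return 0              # @922: condition met
            fi
            sleep 0.5                 # assumed pacing, not shown in the trace
        done
        return 1
    }

    # Companion counter as traced at @74/@75: count notifications newer than
    # notify_id, then advance the cursor (here it moves from 2 to 4 after
    # two new notifications arrive).
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # Invocation as it appears in the trace:
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'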
00:34:31.426 [2024-12-10 12:37:38.218441] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:34:31.426 [2024-12-10 12:37:38.218475] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:34:31.426 [2024-12-10 12:37:38.228771] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x615000327380 was disconnected and freed. delete nvme_qpair.
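The @652-@679 lines trace the negative-test path: starting discovery a second time under the same bdev name must be rejected. A hedged sketch of the NOT/valid_exec_arg pair as the trace suggests; the callable-type patterns are assumptions, and the signal handling at @663/@674 is elided:

    # valid_exec_arg: confirm the first argument is something bash can run.
    valid_exec_arg() {
        local arg=$1                       # @640
        case "$(type -t "$arg")" in        # @644
            function | builtin | file) ;;  # callable: proceed (assumed set)
            *) return 1 ;;
        esac
    }

    # NOT: run a command that is expected to fail; pass only when it does.
    NOT() {
        local es=0                            # @652
        valid_exec_arg "$@" && "$@" || es=$?  # @654, @655
        (( !es == 0 ))                        # @679: true only if es != 0
    }

With the -17 "File exists" response that follows, es becomes 1 and (( !es == 0 )) evaluates true, so the duplicate-start check passes.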
00:34:31.426 request: 00:34:31.426 { 00:34:31.426 "name": "nvme", 00:34:31.426 "trtype": "tcp", 00:34:31.426 "traddr": "10.0.0.2", 00:34:31.426 "adrfam": "ipv4", 00:34:31.426 "trsvcid": "8009", 00:34:31.426 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:31.426 "wait_for_attach": true, 00:34:31.426 "method": "bdev_nvme_start_discovery", 00:34:31.426 "req_id": 1 00:34:31.426 } 00:34:31.426 Got JSON-RPC error response 00:34:31.426 response: 00:34:31.426 { 00:34:31.426 "code": -17, 00:34:31.426 "message": "File exists" 00:34:31.426 } 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.426 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.685 request: 00:34:31.685 { 00:34:31.685 "name": "nvme_second", 00:34:31.685 "trtype": "tcp", 00:34:31.685 "traddr": "10.0.0.2", 00:34:31.685 "adrfam": "ipv4", 00:34:31.685 "trsvcid": "8009", 00:34:31.685 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:31.685 "wait_for_attach": true, 00:34:31.685 "method": "bdev_nvme_start_discovery", 00:34:31.685 "req_id": 1 00:34:31.685 } 00:34:31.685 Got JSON-RPC error response 00:34:31.685 response: 00:34:31.685 { 00:34:31.685 "code": -17, 00:34:31.685 "message": "File exists" 00:34:31.685 } 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:31.685 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.686 12:37:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.686 12:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.062 [2024-12-10 12:37:39.458502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.063 [2024-12-10 12:37:39.458540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327600 with addr=10.0.0.2, port=8010 00:34:33.063 [2024-12-10 12:37:39.458593] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:33.063 [2024-12-10 12:37:39.458604] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:33.063 [2024-12-10 12:37:39.458617] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:33.997 [2024-12-10 12:37:40.460986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.997 [2024-12-10 12:37:40.461034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=8010 00:34:33.997 [2024-12-10 12:37:40.461109] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:33.997 [2024-12-10 12:37:40.461120] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:33.998 [2024-12-10 12:37:40.461130] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:34.933 [2024-12-10 12:37:41.463046] 
bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:34.933 request: 00:34:34.933 { 00:34:34.933 "name": "nvme_second", 00:34:34.933 "trtype": "tcp", 00:34:34.933 "traddr": "10.0.0.2", 00:34:34.933 "adrfam": "ipv4", 00:34:34.933 "trsvcid": "8010", 00:34:34.933 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:34.933 "wait_for_attach": false, 00:34:34.933 "attach_timeout_ms": 3000, 00:34:34.933 "method": "bdev_nvme_start_discovery", 00:34:34.933 "req_id": 1 00:34:34.933 } 00:34:34.933 Got JSON-RPC error response 00:34:34.933 response: 00:34:34.933 { 00:34:34.933 "code": -110, 00:34:34.933 "message": "Connection timed out" 00:34:34.933 } 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3845129 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:34.933 rmmod nvme_tcp 00:34:34.933 rmmod nvme_fabrics 00:34:34.933 rmmod nvme_keyring 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:34.933 12:37:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3844893 ']' 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3844893 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3844893 ']' 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3844893 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844893 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844893' 00:34:34.933 killing process with pid 3844893 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3844893 00:34:34.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3844893 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.310 12:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:38.344 00:34:38.344 real 0m18.552s 00:34:38.344 user 0m23.902s 00:34:38.344 sys 0m5.288s 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.344 ************************************ 00:34:38.344 END TEST nvmf_host_discovery 00:34:38.344 ************************************ 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.344 ************************************ 00:34:38.344 START TEST nvmf_host_multipath_status 00:34:38.344 ************************************ 00:34:38.344 12:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:38.344 * Looking for test storage... 00:34:38.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:38.344 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.345 --rc genhtml_branch_coverage=1 00:34:38.345 --rc genhtml_function_coverage=1 00:34:38.345 --rc genhtml_legend=1 00:34:38.345 --rc geninfo_all_blocks=1 00:34:38.345 --rc geninfo_unexecuted_blocks=1 00:34:38.345 00:34:38.345 ' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.345 --rc genhtml_branch_coverage=1 00:34:38.345 --rc genhtml_function_coverage=1 00:34:38.345 --rc genhtml_legend=1 00:34:38.345 --rc geninfo_all_blocks=1 00:34:38.345 --rc geninfo_unexecuted_blocks=1 00:34:38.345 00:34:38.345 ' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.345 --rc genhtml_branch_coverage=1 00:34:38.345 --rc genhtml_function_coverage=1 00:34:38.345 --rc genhtml_legend=1 00:34:38.345 --rc geninfo_all_blocks=1 00:34:38.345 --rc geninfo_unexecuted_blocks=1 00:34:38.345 00:34:38.345 ' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.345 --rc genhtml_branch_coverage=1 00:34:38.345 --rc genhtml_function_coverage=1 00:34:38.345 --rc genhtml_legend=1 00:34:38.345 --rc geninfo_all_blocks=1 00:34:38.345 --rc geninfo_unexecuted_blocks=1 00:34:38.345 00:34:38.345 ' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
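The lt/cmp_versions trace above decides whether the installed lcov predates 2.x before exporting LCOV_OPTS. A simplified, self-contained stand-in for the traced comparison; cmp_lt is a hypothetical name, and the real scripts/common.sh helper also supports '>' and additional cases:

    # cmp_lt A B: true if dotted version A sorts strictly before B.
    cmp_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"   # split on dots/dashes, as @336/@337 do
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do          # @364: walk the longer version
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less
    }

    cmp_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the traced outcome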
00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:38.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:38.345 12:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.618 12:37:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:43.618 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
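The arrays above collect the Intel E810/X722 and Mellanox device IDs that gather_supported_nvmf_pci_devs matches against. A condensed sketch of the detection step that follows in the trace, with the pci_bus_cache lookup replaced by the two PCI functions this log actually found:

    # Device-ID families, as populated at @325-@344 above.
    e810=(0x1592 0x159b)
    x722=(0x37d2)
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)

    # Hard-coded here from the "Found ..." lines; the real helper walks
    # the cached PCI bus instead.
    pci_devs=("0000:af:00.0" "0000:af:00.1")
    for pci in "${pci_devs[@]}"; do
        # @411: each PCI function exposes its netdevs under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # @427: strip the sysfs path, keeping the interface names
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done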
00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:43.618 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:43.618 Found net devices under 0000:af:00.0: cvl_0_0 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:34:43.618 Found net devices under 0000:af:00.1: cvl_0_1 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:43.618 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.878 12:37:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:34:43.878 00:34:43.878 --- 10.0.0.2 ping statistics --- 00:34:43.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.878 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:43.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:34:43.878 00:34:43.878 --- 10.0.0.1 ping statistics --- 00:34:43.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.878 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3850327 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3850327 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3850327 ']' 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:43.878 12:37:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:43.878 12:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 [2024-12-10 12:37:50.585683] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:34:43.878 [2024-12-10 12:37:50.585774] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.878 [2024-12-10 12:37:50.702278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:44.136 [2024-12-10 12:37:50.810339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.136 [2024-12-10 12:37:50.810379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.136 [2024-12-10 12:37:50.810390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.136 [2024-12-10 12:37:50.810401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.136 [2024-12-10 12:37:50.810409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.136 [2024-12-10 12:37:50.812547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.136 [2024-12-10 12:37:50.812552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3850327 00:34:44.704 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:44.963 [2024-12-10 12:37:51.609172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.963 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:45.224 Malloc0 00:34:45.224 12:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:34:45.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:45.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.741 [2024-12-10 12:37:52.420533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.741 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:46.000 [2024-12-10 12:37:52.596986] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3850585 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3850585 /var/tmp/bdevperf.sock 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3850585 ']' 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:46.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
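For orientation, the provisioning that the xtrace lines above walk through reduces to a short rpc.py sequence. This is a condensed sketch, not the script itself: rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the comments are editorial.

    rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data size
    rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting on
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf is started with -z (idle until perform_tests) on its own RPC socket; both
    # listeners are then attached as paths of a single multipath bdev (the @55/@56 lines below):
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Because both attach calls use -b Nvme0 with -x multipath, they produce one Nvme0n1 bdev with two I/O paths rather than two separate bdevs.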
00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.000 12:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:46.937 12:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.937 12:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:46.937 12:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:46.937 12:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:47.503 Nvme0n1 00:34:47.503 12:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:48.070 Nvme0n1 00:34:48.070 12:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:48.070 12:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:49.973 12:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:49.973 12:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:50.232 12:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:50.232 12:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.609 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:51.868 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.869 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:51.869 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.869 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:51.869 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.869 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:51.869 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.869 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:52.128 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.128 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:52.128 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.128 12:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:52.387 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.387 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:52.387 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.387 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:52.645 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.645 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:52.645 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
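The set_ANA_state calls traced at @59/@60 flip the ANA state of each listener independently. Read as a sketch reconstructed from the xtrace output (with $rpc standing for the scripts/rpc.py path; the body is an assumption, only the two RPC invocations are verbatim), the helper is roughly:

    set_ANA_state() {
        # $1: ANA state for the 4420 listener, $2: ANA state for the 4421 listener
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Each transition is followed by a one-second sleep before the status checks, giving the initiator time to process the resulting ANA change notification.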
00:34:52.904 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:52.904 12:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:54.281 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:54.281 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:54.281 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.281 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:54.282 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:54.282 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:54.282 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.282 12:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:54.282 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.540 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:54.540 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.540 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:54.540 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.540 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:54.540 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.540 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:54.799 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.799 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:54.799 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
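Every individual assertion in these blocks pairs a bdev_nvme_get_io_paths RPC against the bdevperf socket with a jq selector on one path attribute. Reconstructed from the @64 trace lines (the helper names appear in the trace; the bodies are a sketch, with $rpc standing for scripts/rpc.py):

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]    # non-zero return fails the test
    }

    check_status() {
        # args: current/4420 current/4421 connected/4420 connected/4421 accessible/4420 accessible/4421
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }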
00:34:54.799 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.058 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.058 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:55.058 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.058 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:55.318 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.318 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:55.318 12:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:55.318 12:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:55.578 12:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:34:56.956 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.215 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.215 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:57.215 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.215 12:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:57.475 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.475 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:57.475 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.475 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:57.734 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.734 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:57.734 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.734 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:57.993 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.993 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:57.993 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:57.993 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:58.252 12:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:59.187 12:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:59.187 12:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:59.187 12:38:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:59.187 12:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.446 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.446 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:59.446 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.446 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:59.705 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:59.705 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:59.705 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:59.705 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.964 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.964 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:59.964 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:59.964 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.224 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.224 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.224 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.224 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:00.224 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.224 12:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:00.224 12:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.224 12:38:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:00.483 12:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.483 12:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:00.483 12:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:00.742 12:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:01.000 12:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:01.936 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:01.936 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:01.936 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.936 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.196 12:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:02.455 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.455 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:02.455 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:35:02.455 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.714 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.714 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:02.714 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.714 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:02.973 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.973 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:02.973 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.973 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:02.973 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.973 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:02.973 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:03.232 12:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:03.490 12:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:04.427 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:04.427 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:04.427 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.427 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:04.686 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:04.686 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:04.686 12:38:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.686 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.945 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:05.204 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.204 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:05.204 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.204 12:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:05.463 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:05.463 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:05.463 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.463 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:05.722 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.722 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:05.722 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:35:05.722 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:05.981 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:06.239 12:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:07.177 12:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:07.177 12:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:07.177 12:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.177 12:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:07.435 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.435 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:07.435 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.436 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:07.694 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.694 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:07.694 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:07.694 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.953 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.953 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:07.953 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.953 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:07.953 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.953 12:38:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:07.953 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:07.953 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.212 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.212 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:08.212 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.212 12:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:08.469 12:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.469 12:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:08.469 12:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:08.727 12:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:08.986 12:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:09.955 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:09.955 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:09.955 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.955 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:10.214 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:10.214 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:10.214 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:10.214 12:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.214 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.214 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:10.214 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.214 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:10.473 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.473 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:10.474 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.474 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:10.732 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.732 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:10.732 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.732 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:10.992 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.992 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:10.992 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.992 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:10.992 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.992 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:10.992 12:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:11.251 12:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:11.509 12:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
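Pulling the repetitive blocks together: each cycle sets one ANA state per listener, sleeps one second, and asserts the six flags. active_passive is the policy in effect until the @116 bdev_nvme_set_multipath_policy call switches Nvme0n1 to active_active; under active_passive a single best path carries I/O, while under active_active all paths in the best available ANA state do. The combinations exercised in this run, with the expected values for ports 4420/4421 taken from the check_status calls in this log:

    ANA state (4420/4421)         policy          current      connected   accessible
    optimized/optimized           active_passive  true/false   true/true   true/true
    non_optimized/optimized       active_passive  false/true   true/true   true/true
    non_optimized/non_optimized   active_passive  true/false   true/true   true/true
    non_optimized/inaccessible    active_passive  true/false   true/true   true/false
    inaccessible/inaccessible     active_passive  false/false  true/true   false/false
    inaccessible/optimized        active_passive  false/true   true/true   false/true
    optimized/optimized           active_active   true/true    true/true   true/true
    non_optimized/optimized       active_active   false/true   true/true   true/true
    non_optimized/non_optimized   active_active   true/true    true/true   true/true
    non_optimized/inaccessible    active_active   true/false   true/true   true/false

Note that an inaccessible listener stays connected throughout; only its current and accessible flags drop.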
00:35:12.446 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:12.446 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:12.446 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.446 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:12.705 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.706 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:12.706 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.706 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:12.965 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.965 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:12.965 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.965 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:13.224 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.224 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:13.224 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.224 12:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:13.483 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.483 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:13.483 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.483 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:13.742 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.742 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:13.742 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.742 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:13.742 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.742 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:13.742 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:14.001 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:14.259 12:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:15.196 12:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:15.196 12:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:15.196 12:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:15.196 12:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.455 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.455 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:15.455 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.455 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:15.714 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.714 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:15.714 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.714 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:15.973 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:16.232 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:16.232 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:16.232 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:16.232 12:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3850585
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3850585 ']'
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3850585
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3850585
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3850585'
00:35:16.491 killing process with pid 3850585
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3850585
00:35:16.491 12:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3850585
00:35:16.491 {
00:35:16.491   "results": [
00:35:16.491     {
00:35:16.491       "job": "Nvme0n1",
00:35:16.491       "core_mask": "0x4",
00:35:16.491       "workload": "verify",
00:35:16.491       "status": "terminated",
00:35:16.491       "verify_range": {
00:35:16.491         "start": 0,
00:35:16.491         "length": 16384
00:35:16.491       },
00:35:16.491       "queue_depth": 128,
00:35:16.491       "io_size": 4096,
00:35:16.491       "runtime": 28.479786,
00:35:16.491       "iops": 9292.134428257292,
00:35:16.491       "mibps": 36.297400110380046,
00:35:16.491       "io_failed": 0,
00:35:16.491       "io_timeout": 0,
00:35:16.491       "avg_latency_us": 13750.601428366297,
00:35:16.491       "min_latency_us": 269.1657142857143,
00:35:16.491       "max_latency_us": 3083812.083809524
00:35:16.491     }
00:35:16.491   ],
00:35:16.491   "core_count": 1
00:35:16.491 }
00:35:17.459 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3850585
00:35:17.459 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:17.459 [2024-12-10 12:37:52.673811] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:35:17.459 [2024-12-10 12:37:52.673903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850585 ]
00:35:17.459 [2024-12-10 12:37:52.781995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:17.459 [2024-12-10 12:37:52.891825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:35:17.459 Running I/O for 90 seconds...
00:35:17.459 10060.00 IOPS, 39.30 MiB/s [2024-12-10T11:38:24.285Z] 10186.00 IOPS, 39.79 MiB/s [2024-12-10T11:38:24.285Z] 10176.33 IOPS, 39.75 MiB/s [2024-12-10T11:38:24.285Z] 10178.50 IOPS, 39.76 MiB/s [2024-12-10T11:38:24.285Z] 10151.20 IOPS, 39.65 MiB/s [2024-12-10T11:38:24.285Z] 10114.83 IOPS, 39.51 MiB/s [2024-12-10T11:38:24.285Z] 10100.00 IOPS, 39.45 MiB/s [2024-12-10T11:38:24.285Z] 10073.75 IOPS, 39.35 MiB/s [2024-12-10T11:38:24.285Z] 10072.89 IOPS, 39.35 MiB/s [2024-12-10T11:38:24.285Z] 10051.80 IOPS, 39.26 MiB/s [2024-12-10T11:38:24.285Z] 10039.64 IOPS, 39.22 MiB/s [2024-12-10T11:38:24.285Z] 10039.50 IOPS, 39.22 MiB/s [2024-12-10T11:38:24.285Z] [2024-12-10 12:38:07.364793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.459 [2024-12-10 12:38:07.364854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:35:17.459 [2024-12-10 12:38:07.364890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.459 [2024-12-10 12:38:07.364903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:35:17.459 [2024-12-10 12:38:07.364923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.459 [2024-12-10 12:38:07.364934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:35:17.459 [2024-12-10 12:38:07.364954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
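Two details in the bdevperf tail above are worth decoding. First, the results JSON is self-consistent: mibps = iops * io_size / 2^20 = 9292.134 * 4096 / 1048576 ≈ 36.2974, matching the reported value, and over the 28.48 s runtime that is roughly 9292.134 * 28.4798 ≈ 264,600 verified I/Os with "io_failed": 0. Second, the (03/02) completions in the try.txt dump are NVMe path-related status (Status Code Type 03h, Status Code 02h: Asymmetric Access Inaccessible), returned for commands that were in flight on a listener just moved to the inaccessible ANA state; the multipath bdev treats them as path errors and retries on the remaining path, which is why the job still terminates with zero failed I/Os.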
00:35:17.459 [2024-12-10 12:38:07.364967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.364986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.364998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.459 [2024-12-10 12:38:07.365928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.459 [2024-12-10 12:38:07.365945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.365957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.365979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.365992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:35:17.460 [2024-12-10 12:38:07.366213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.366978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.366988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.460 [2024-12-10 12:38:07.367450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.460 [2024-12-10 12:38:07.367772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.460 [2024-12-10 12:38:07.367797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.461 [2024-12-10 12:38:07.367810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.367830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.461 [2024-12-10 12:38:07.367841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.367861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.461 [2024-12-10 12:38:07.367873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.367895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.367907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.367928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.367941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.367961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.367974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.367991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.461 [2024-12-10 12:38:07.368556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:35:17.461 [2024-12-10 12:38:07.368572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.461 [2024-12-10 12:38:07.368582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.461 [2024-12-10 12:38:07.368609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.461 [2024-12-10 12:38:07.368830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.461 [2024-12-10 12:38:07.368847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.368857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.368874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.368883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.368900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.368910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.368927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.368936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.368953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.368963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.368979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.368991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
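One detail worth noting in these completions: sqhd, the reported submission queue head, advances with each completion and has just wrapped from 007f back to 0000, i.e. after 128 entries, consistent with a 128-slot submission queue for this qpair (the job ran with "queue_depth": 128). An illustrative one-liner, not part of the test, to watch the wrap in the saved dump:

    # Prints the sqhd sequence; it counts up ... 007e, 007f, 0000, 0001 ...
    grep -o 'sqhd:[0-9a-f]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | head -n 300

The dump resumes immediately below.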
00:35:17.462 [2024-12-10 12:38:07.369380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.369512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.369529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.369539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.462 [2024-12-10 12:38:07.370109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.462 [2024-12-10 12:38:07.370463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.462 [2024-12-10 12:38:07.370474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.463 [2024-12-10 12:38:07.370709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.463 [2024-12-10 12:38:07.370719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:35:17.463 [2024-12-10 12:38:07.370735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.463 [2024-12-10 12:38:07.370745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:17.463 [... a long run of near-identical NOTICE pairs (2024-12-10 12:38:07.370 through 12:38:07.389) trimmed: alternating nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion records on qid:1; WRITEs covering lba 94176-94736 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs covering lba 93720-93968 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); the same LBA ranges recur under different cids; every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0 and sqhd advancing from 0x0021, wrapping past 0x007f to 0x0000 and continuing ...]
00:35:17.468 [2024-12-10 12:38:07.389838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.468 [2024-12-10 12:38:07.389848]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.389865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.389875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.389892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.389903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.389920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.389930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.389947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.389957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.389973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.389983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.390000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.390010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.390027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.390039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.390055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.390065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.390082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.390092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.468 [2024-12-10 12:38:07.390111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:17.468 [2024-12-10 12:38:07.390121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.390503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.390512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.391074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.469 [2024-12-10 12:38:07.391146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391230] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.469 [2024-12-10 12:38:07.391782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.469 [2024-12-10 12:38:07.391792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.391809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.391820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.391836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.391846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.391863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.391874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.391891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.391901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.391918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.391928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.391945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.391958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.470 [2024-12-10 12:38:07.397713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.397983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94584 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.397993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.398987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.398999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.470 [2024-12-10 12:38:07.399019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.470 [2024-12-10 12:38:07.399030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.471 [2024-12-10 12:38:07.399063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.471 [2024-12-10 12:38:07.399093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.471 [2024-12-10 12:38:07.399124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.471 [2024-12-10 12:38:07.399151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:35:17.471 [2024-12-10 12:38:07.399288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.471 [2024-12-10 12:38:07.399831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.471 [2024-12-10 12:38:07.399859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.471 [2024-12-10 12:38:07.399886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.399988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.399999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.400016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.400027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.400044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.400054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.400071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.400081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.400098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.471 [2024-12-10 12:38:07.400109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.400126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.471 [2024-12-10 12:38:07.400137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.471 [2024-12-10 12:38:07.400160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.472 [2024-12-10 12:38:07.400386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.472 [2024-12-10 12:38:07.400397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
[condensed: 2024-12-10 12:38:07.400414 - 12:38:07.407782, roughly 240 further nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs of the same shape. Every queued READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1, nsid:1, len:8, lba 93720-94736 is completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, with sqhd cycling through 0x0000-0x007f.]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.407809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.407826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.407837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.407853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.407864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.407884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.407895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.407912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.407923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.407940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.407951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.407968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.407980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:35:17.477 [2024-12-10 12:38:07.408734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-10 12:38:07.408899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.477 [2024-12-10 12:38:07.408928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.477 [2024-12-10 12:38:07.408955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.408973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.477 [2024-12-10 12:38:07.408983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.409000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.477 [2024-12-10 12:38:07.409010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.409027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.477 [2024-12-10 12:38:07.409038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.409055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.477 [2024-12-10 12:38:07.409065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.477 [2024-12-10 12:38:07.409083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.477 [2024-12-10 12:38:07.409093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.478 [2024-12-10 12:38:07.409585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-10 12:38:07.409613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-10 12:38:07.409642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.409975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.409985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.478 [2024-12-10 12:38:07.410217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.478 [2024-12-10 12:38:07.410235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.410463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.410473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.411062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.411095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.411124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.411152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.479 [2024-12-10 12:38:07.411227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.479 [2024-12-10 12:38:07.411875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.479 [2024-12-10 12:38:07.411963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.479 [2024-12-10 12:38:07.411982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.411992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.480 [2024-12-10 12:38:07.412712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-12-10 12:38:07.412723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:35:17.480 [2024-12-10 12:38:07.412741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.480 [2024-12-10 12:38:07.412751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:35:17.480 [2024-12-10 12:38:07.413850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.481 [2024-12-10 12:38:07.413860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:35:17.486 [command/completion pairs of the same form repeat through 2024-12-10 12:38:07.421063 for every queued I/O on qid:1 (cids 0-126; READ lbas 93720-94168, WRITE lbas 94176-94736); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0]
0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:35:17.486 [2024-12-10 12:38:07.421923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.421977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.421988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.422005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.422015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.422032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.422042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.422059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.422069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.422088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.422098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.422117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.422127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.486 [2024-12-10 12:38:07.422145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.486 [2024-12-10 12:38:07.422155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.422575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.422585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.487 [2024-12-10 12:38:07.423414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.487 [2024-12-10 12:38:07.423595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.487 [2024-12-10 12:38:07.423948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-12-10 12:38:07.423958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.423975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.423986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:35:17.488 [2024-12-10 12:38:07.424266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.488 [2024-12-10 12:38:07.424276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.488 [2024-12-10 12:38:07.424305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.488 [2024-12-10 12:38:07.424334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.424980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.424990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.425008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.425018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.425036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.425047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.425065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.488 [2024-12-10 12:38:07.425076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.488 [2024-12-10 12:38:07.425640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:17.488 [2024-12-10 12:38:07.425658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.489 [2024-12-10 12:38:07.425690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.489 [2024-12-10 12:38:07.425718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.489 [2024-12-10 12:38:07.425750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.489 [2024-12-10 12:38:07.425779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.489 [2024-12-10 12:38:07.425807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.489 [2024-12-10 12:38:07.425836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.489 [2024-12-10 12:38:07.425866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.489 [2024-12-10 12:38:07.425899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.489 [2024-12-10 12:38:07.425929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.489 [2024-12-10 12:38:07.425947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.489 [2024-12-10 12:38:07.425959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:35:17.489 [2024-12-10 12:38:07.425977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.489 [2024-12-10 12:38:07.425989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[log condensed: the same command/completion NOTICE pair repeats for every outstanding I/O on qid:1 from 2024-12-10 12:38:07.425959 through 12:38:07.433753 -- WRITEs lba:94184-94736 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs lba:93720-94168 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd incrementing 000c-007f and wrapping to 0000; a second pass over the same LBA ranges with fresh cids follows from 12:38:07.430502 with identical completions]
00:35:17.494 [2024-12-10 12:38:07.433742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.494 [2024-12-10 12:38:07.433753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.433974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.433985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.434002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.434013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.434029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.434043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.434060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.494 [2024-12-10 12:38:07.434070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.434089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.494 [2024-12-10 12:38:07.434100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.434117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.494 [2024-12-10 12:38:07.434128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.494 [2024-12-10 12:38:07.434146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.494 [2024-12-10 12:38:07.434157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.495 [2024-12-10 12:38:07.434334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.434817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.434828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.495 [2024-12-10 12:38:07.435409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.495 [2024-12-10 12:38:07.435446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.495 [2024-12-10 12:38:07.435479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:35:17.495 [2024-12-10 12:38:07.435500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.495 [2024-12-10 12:38:07.435511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.495 [2024-12-10 12:38:07.435543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.495 [2024-12-10 12:38:07.435577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.495 [2024-12-10 12:38:07.435608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.495 [2024-12-10 12:38:07.435629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.435985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.435999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.496 [2024-12-10 12:38:07.436460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.496 [2024-12-10 12:38:07.436912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.496 [2024-12-10 12:38:07.436934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.436945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:07.436966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.436977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:07.436998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.437009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:07.437029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.437041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:07.437062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.437073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:07.437093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.437104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:07.437126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.437136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:07.437291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:07.437305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.497 9723.31 IOPS, 37.98 MiB/s [2024-12-10T11:38:24.323Z] 9028.79 IOPS, 35.27 MiB/s [2024-12-10T11:38:24.323Z] 8426.87 IOPS, 32.92 MiB/s [2024-12-10T11:38:24.323Z] 8097.69 IOPS, 31.63 MiB/s [2024-12-10T11:38:24.323Z] 8212.94 IOPS, 32.08 MiB/s [2024-12-10T11:38:24.323Z] 8318.11 IOPS, 32.49 MiB/s [2024-12-10T11:38:24.323Z] 8515.26 IOPS, 33.26 MiB/s [2024-12-10T11:38:24.323Z] 8691.90 IOPS, 33.95 MiB/s [2024-12-10T11:38:24.323Z] 8813.19 IOPS, 34.43 MiB/s [2024-12-10T11:38:24.323Z] 8869.82 IOPS, 34.65 MiB/s [2024-12-10T11:38:24.323Z] 8909.83 IOPS, 34.80 MiB/s [2024-12-10T11:38:24.323Z] 8994.79 IOPS, 35.14 MiB/s [2024-12-10T11:38:24.323Z] 9124.12 IOPS, 35.64 MiB/s [2024-12-10T11:38:24.323Z] 9233.35 IOPS, 36.07 MiB/s [2024-12-10T11:38:24.323Z] [2024-12-10 12:38:20.919729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.919813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.919843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.919876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.919903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f 
p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.919929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.919957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.919983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.919993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.497 [2024-12-10 12:38:20.920323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.497 [2024-12-10 12:38:20.920351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.497 [2024-12-10 12:38:20.920479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.497 [2024-12-10 12:38:20.920489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:17.497 [2024-12-10 12:38:20.921611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.497 [2024-12-10 12:38:20.921638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:17.497 [2024-12-10 12:38:20.921661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.497 [2024-12-10 12:38:20.921672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:35:17.498 [2024-12-10 12:38:20.922765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.498 [2024-12-10 12:38:20.922775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[~200 further qid:1 READ/WRITE command/completion NOTICE pairs from nvme_qpair.c, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted; timestamps 2024-12-10 12:38:20.921 through 12:38:20.935, LBAs ~97440-98832]
00:35:17.505 [2024-12-10 12:38:20.935898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.505 [2024-12-10 12:38:20.935921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:35:17.505 [2024-12-10 12:38:20.935943] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.935955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.935973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.935984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.505 
[2024-12-10 12:38:20.936231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.505 [2024-12-10 12:38:20.936241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.505 [2024-12-10 12:38:20.936269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.505 [2024-12-10 12:38:20.936501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.505 [2024-12-10 12:38:20.936528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.505 [2024-12-10 12:38:20.936555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.505 [2024-12-10 12:38:20.936583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.505 [2024-12-10 12:38:20.936612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.505 [2024-12-10 12:38:20.936628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.936639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.936656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.936667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.936685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.936695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.936713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.936723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.936740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.936753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.936770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.936781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.936799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.936809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.938108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.506 [2024-12-10 12:38:20.938345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.938433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.506 [2024-12-10 12:38:20.938518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.938919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.938952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.938980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.938998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.939008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.939026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.939037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.939054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.939064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.506 [2024-12-10 12:38:20.939081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.506 [2024-12-10 12:38:20.939093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.939151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.939327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.939468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.939497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.939524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.939553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:35:17.507 [2024-12-10 12:38:20.939569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.939580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.939597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.939608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.940729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.940789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.940819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.940850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.940879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.940907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.940939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.507 [2024-12-10 12:38:20.940969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.940986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.940997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.941014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.507 [2024-12-10 12:38:20.941025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.507 [2024-12-10 12:38:20.941041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.508 [2024-12-10 12:38:20.941536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.941763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.941781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.941792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.943955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.508 [2024-12-10 12:38:20.943980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.944002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.508 [2024-12-10 12:38:20.944013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.508 [2024-12-10 12:38:20.944031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:35:17.509 [2024-12-10 12:38:20.944546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.509 [2024-12-10 12:38:20.944756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.944773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.944782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.945619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.945641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.945665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.509 [2024-12-10 12:38:20.945680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.509 [2024-12-10 12:38:20.945698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.945708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.945736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.945764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.945792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.945820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.945847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.945874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.945904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.945938] nvme_qpair.c: 
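Editor's note on the elided run above: the repeated pair is SPDK's I/O tracing, where nvme_io_qpair_print_command() logs each submission-queue entry (opcode, sqid/cid, LBA, SGL type) and spdk_nvme_print_completion() logs the matching completion. ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the NVMe path-related status (Status Code Type 0x3, Status Code 0x2) a controller returns while the namespace's ANA group is in the Inaccessible state; dnr:0 means the Do Not Retry bit is clear, so the host is permitted to retry the I/O on another path, which is what an ANA/multipath test like this one exercises. Below is a minimal sketch for tallying such completions from a saved console log; it assumes only the NOTICE layout visible here, and the script name and structure are illustrative, not part of the SPDK test suite.

#!/usr/bin/env python3
"""Illustrative sketch (not part of the SPDK test suite): tally
spdk_nvme_print_completion NOTICE records in a saved autotest log
by status string and sct/sc code, e.g.
"ASYMMETRIC ACCESS INACCESSIBLE (03/02) ... dnr:0"."""
import re
import sys
from collections import Counter

# sct/sc are the NVMe Status Code Type and Status Code in hex;
# 03/02 = path-related status / asymmetric access inaccessible.
COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) .*?dnr:(?P<dnr>[01])"
)

def main(path: str) -> None:
    counts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            # finditer copes with wrapped lines carrying several records
            for m in COMPLETION.finditer(line):
                key = f"{m['status']} ({m['sct']}/{m['sc']}) dnr:{m['dnr']}"
                counts[key] += 1
    for status, n in counts.most_common():
        print(f"{n:8d}  {status}")

if __name__ == "__main__":
    main(sys.argv[1])

Run against a log like this one (hypothetical file name), e.g. "python3 tally_completions.py nvmf-tcp-phy-autotest.log", it would report one dominant bucket for the (03/02) status seen throughout this phase of the test. The log resumes below.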
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.945967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.945983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.945994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.946107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.946137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.946164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.946672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.510 [2024-12-10 12:38:20.946706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.946734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.510 [2024-12-10 12:38:20.946762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.510 [2024-12-10 12:38:20.946895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.510 [2024-12-10 12:38:20.946906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.946923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.946934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.946951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.946961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.946979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99072 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.946989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.947018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.947046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.947224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.947252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.947337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.947811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.947843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.947975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.947986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:35:17.511 [2024-12-10 12:38:20.948003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.948017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.948046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.948073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.948101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.948129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.948157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.948193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.511 [2024-12-10 12:38:20.948221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.511 [2024-12-10 12:38:20.948250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.511 [2024-12-10 12:38:20.948901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.948921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.948945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.948955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.948973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.948985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.949134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.949163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.949197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.949282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.949309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.949370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.949390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.949401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.950578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.950611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.950639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.512 [2024-12-10 12:38:20.950666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.950695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.950724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.950753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.950780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.950808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.950836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.950864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.950895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.512 [2024-12-10 12:38:20.950922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.512 [2024-12-10 12:38:20.950950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.512 [2024-12-10 12:38:20.950966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.950977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.950994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.951004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.951034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.951062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.951090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.951117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.951145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.951180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.951208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.951238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.951255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.951266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.953950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:35:17.513 [2024-12-10 12:38:20.953978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.953989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.954016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.954223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.954250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.954277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.513 [2024-12-10 12:38:20.954331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.513 [2024-12-10 12:38:20.954349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.513 [2024-12-10 12:38:20.954359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.954785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.514 [2024-12-10 12:38:20.954819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.954864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.954875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.955939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.955961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.955988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.956029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.956057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.956086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.956115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.956143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.514 [2024-12-10 12:38:20.956187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.514 [2024-12-10 12:38:20.956389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.514 [2024-12-10 12:38:20.956789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.515 [2024-12-10 12:38:20.956812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.515 [2024-12-10 12:38:20.956834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.515 [2024-12-10 12:38:20.956845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.515 [2024-12-10 12:38:20.956864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.515 [2024-12-10 12:38:20.956875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:35:17.515 [2024-12-10 12:38:20.956893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.515 [2024-12-10 12:38:20.956905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0
[... further NOTICE pairs in the same pattern elided (2024-12-10 12:38:20.956923 through 12:38:20.972153, log prefixes 00:35:17.515-00:35:17.521): alternating nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion lines for READ/WRITE commands on sqid:1 (nsid:1, len:8, lba 98720-100944; READs report SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, WRITEs report SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, with sqhd advancing from 003b and wrapping at 007f ...]
00:35:17.521 [2024-12-10 12:38:20.972176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100720 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.972275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.972336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.972422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:82 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.972509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.972527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.972538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.973340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.973391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.973420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.973452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.973480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.973508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.973535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.973563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.521 [2024-12-10 12:38:20.973592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.973620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.973647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.521 [2024-12-10 12:38:20.973674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.521 [2024-12-10 12:38:20.973691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.522 [2024-12-10 12:38:20.973701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.973718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.522 [2024-12-10 12:38:20.973729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.973745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.522 [2024-12-10 12:38:20.973756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.973773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.522 [2024-12-10 12:38:20.973786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.522 [2024-12-10 12:38:20.974674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 
cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.522 [2024-12-10 12:38:20.974714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.522 [2024-12-10 12:38:20.974743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.522 [2024-12-10 12:38:20.974771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.522 [2024-12-10 12:38:20.974799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.522 [2024-12-10 12:38:20.974826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.522 [2024-12-10 12:38:20.974854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.522 [2024-12-10 12:38:20.974882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.522 [2024-12-10 12:38:20.974899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.522 [2024-12-10 12:38:20.974910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.522 9260.78 IOPS, 36.17 MiB/s [2024-12-10T11:38:24.348Z] 9280.29 IOPS, 36.25 MiB/s [2024-12-10T11:38:24.348Z] Received shutdown signal, test time was about 28.480476 seconds 00:35:17.522 00:35:17.522 Latency(us) 00:35:17.522 [2024-12-10T11:38:24.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.522 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:17.522 Verification LBA range: start 0x0 length 0x4000 00:35:17.522 Nvme0n1 : 28.48 9292.13 36.30 0.00 0.00 13750.60 269.17 3083812.08 00:35:17.522 [2024-12-10T11:38:24.348Z] 
=================================================================================================================== 00:35:17.522 [2024-12-10T11:38:24.348Z] Total : 9292.13 36.30 0.00 0.00 13750.60 269.17 3083812.08 00:35:17.522 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:17.816 rmmod nvme_tcp 00:35:17.816 rmmod nvme_fabrics 00:35:17.816 rmmod nvme_keyring 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3850327 ']' 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3850327 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3850327 ']' 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3850327 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3850327 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3850327' 00:35:17.816 killing process with pid 3850327 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3850327 00:35:17.816 12:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3850327 00:35:19.222 12:38:25 
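The @143-@148 trace above is the multipath test's teardown: delete the NVMe-oF subsystem over JSON-RPC, sync, unload the initiator kernel modules, and kill the target process. A standalone sketch of the same sequence, with commands and paths taken from this trace ($nvmfpid stands in for the recorded target pid, 3850327 in this run):

# Sketch of the teardown performed above, using the same RPC and module commands as the trace.
spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this run
$spdk_dir/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp             # rmmod's nvme_tcp, nvme_fabrics, nvme_keyring as logged
modprobe -v -r nvme-fabrics         # no-op if the previous removal already pulled it out
kill "$nvmfpid" && wait "$nvmfpid"  # $nvmfpid: target pid saved at startup; wait works because
                                    # the same shell spawned the target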
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:19.222 12:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:21.127 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:21.127
00:35:21.127 real 0m42.987s
00:35:21.127 user 1m56.438s
00:35:21.127 sys 0m11.069s
00:35:21.127 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:21.127 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:35:21.127 ************************************
00:35:21.127 END TEST nvmf_host_multipath_status
00:35:21.127 ************************************
00:35:21.127 12:38:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:21.127 12:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:21.127 12:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:21.127 12:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:21.387 ************************************
00:35:21.387 START TEST nvmf_discovery_remove_ifc
************************************
00:35:21.387 12:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:21.387 * Looking for test storage...
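run_test, invoked a few lines above, is the autotest harness entry point that produces the START/END banners and the real/user/sys timing seen in this log. For illustration only (this is not the verbatim helper from autotest_common.sh), a wrapper with the same observable behavior could look like:

# Illustration only -- not SPDK's actual run_test implementation.
run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # the time keyword preserves the command's exit status
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
}

# Usage mirroring the trace above:
# run_test nvmf_discovery_remove_ifc \
#         /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp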
00:35:21.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.387 --rc genhtml_branch_coverage=1 00:35:21.387 --rc genhtml_function_coverage=1 00:35:21.387 --rc genhtml_legend=1 00:35:21.387 --rc geninfo_all_blocks=1 00:35:21.387 --rc geninfo_unexecuted_blocks=1 00:35:21.387 00:35:21.387 ' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.387 --rc genhtml_branch_coverage=1 00:35:21.387 --rc genhtml_function_coverage=1 00:35:21.387 --rc genhtml_legend=1 00:35:21.387 --rc geninfo_all_blocks=1 00:35:21.387 --rc geninfo_unexecuted_blocks=1 00:35:21.387 00:35:21.387 ' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.387 --rc genhtml_branch_coverage=1 00:35:21.387 --rc genhtml_function_coverage=1 00:35:21.387 --rc genhtml_legend=1 00:35:21.387 --rc geninfo_all_blocks=1 00:35:21.387 --rc geninfo_unexecuted_blocks=1 00:35:21.387 00:35:21.387 ' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.387 --rc genhtml_branch_coverage=1 00:35:21.387 --rc genhtml_function_coverage=1 00:35:21.387 --rc genhtml_legend=1 00:35:21.387 --rc geninfo_all_blocks=1 00:35:21.387 --rc geninfo_unexecuted_blocks=1 00:35:21.387 00:35:21.387 ' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.387 
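The cmp_versions trace above decides whether the installed lcov (1.15) is older than 2 by splitting each version string on '.' and '-' and comparing the fields numerically, padding the shorter version with zeros. A condensed sketch of that logic (the lt helper name is kept from the trace; the real helper also validates each field with the decimal check seen above):

# Sketch of the field-wise version comparison traced above (scripts/common.sh style).
lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"   # e.g. 1.15 -> (1 15)
        IFS=.- read -ra ver2 <<< "$2"   # e.g. 2    -> (2)
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
                (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # ver1 newer: not less-than
                (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # ver1 older: less-than
        done
        return 1   # equal: not less-than
}

# Usage, matching the decision in the trace:
lt 1.15 2 && echo "installed lcov predates 2.x, fall back to the old LCOV_OPTS"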
12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.387 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:21.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.388 12:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:26.661 12:38:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:26.661 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.661 12:38:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:26.661 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:26.661 Found net devices under 0000:af:00.0: cvl_0_0 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.661 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:26.661 Found net devices under 0000:af:00.1: cvl_0_1 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.662 
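The nvmf_tcp_init steps above build the physical test topology: the first port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the default namespace as the initiator, both get 10.0.0.x/24 addresses, and an iptables rule admits TCP/4420; the ping exchange just below verifies the link. Condensed from the traced commands:

# Condensed from the nvmf_tcp_init commands traced above.
ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # sanity check: initiator -> target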
12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:35:26.662 00:35:26.662 --- 10.0.0.2 ping statistics --- 00:35:26.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.662 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:26.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:35:26.662 00:35:26.662 --- 10.0.0.1 ping statistics --- 00:35:26.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.662 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3859382 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3859382 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3859382 ']' 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
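nvmfappstart launches nvmf_tgt inside the target namespace and then blocks until the app answers on its RPC socket. A sketch of that wait (an assumption about what waitforlisten's behavior amounts to -- polling rpc_get_methods -- not the verbatim helper from autotest_common.sh):

# Sketch only: poll until the app's RPC socket accepts requests, or the app dies.
waitforlisten() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
                kill -0 "$pid" 2> /dev/null || return 1   # app exited before listening
                if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                                -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; then
                        return 0                          # RPC server is up
                fi
                sleep 0.1
        done
        return 1
}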
00:35:26.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.662 12:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.921 [2024-12-10 12:38:33.535156] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:35:26.921 [2024-12-10 12:38:33.535267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.921 [2024-12-10 12:38:33.650760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.180 [2024-12-10 12:38:33.760286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.180 [2024-12-10 12:38:33.760324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.180 [2024-12-10 12:38:33.760334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.180 [2024-12-10 12:38:33.760344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.180 [2024-12-10 12:38:33.760351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.180 [2024-12-10 12:38:33.761510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.747 [2024-12-10 12:38:34.383895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.747 [2024-12-10 12:38:34.392070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:27.747 null0 00:35:27.747 [2024-12-10 12:38:34.424047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3859618 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3859618 /tmp/host.sock 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3859618 ']' 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:27.747 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.747 12:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.747 [2024-12-10 12:38:34.520453] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:35:27.747 [2024-12-10 12:38:34.520540] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859618 ] 00:35:28.005 [2024-12-10 12:38:34.632724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.005 [2024-12-10 12:38:34.738440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.572 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.139 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.139 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:29.139 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.139 12:38:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.074 [2024-12-10 12:38:36.683636] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:30.074 [2024-12-10 12:38:36.683670] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:30.074 [2024-12-10 12:38:36.683696] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:30.074 [2024-12-10 12:38:36.811095] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:30.074 [2024-12-10 12:38:36.872902] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:30.074 [2024-12-10 12:38:36.873998] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 00:35:30.074 [2024-12-10 12:38:36.875638] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:30.074 [2024-12-10 12:38:36.875688] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:30.074 [2024-12-10 12:38:36.875737] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:30.074 [2024-12-10 12:38:36.875759] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:30.074 [2024-12-10 12:38:36.875788] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.074 [2024-12-10 12:38:36.883259] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
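Condensed replay of the bring-up traced above: one nvmf_tgt runs inside the namespace as the target, a second nvmf_tgt on the host acts as the NVMe-oF host, and bdev_nvme_start_discovery attaches the exported namespace as a bdev. Command arguments are copied from the trace; rpc.py here stands in for the suite's rpc_cmd wrapper, SPDK_DIR is assumed to point at the checkout, and the target-side transport/subsystem setup (the @43 rpc_cmd block, visible as the tcp.c listen notices above) plus the waitforlisten steps are omitted for brevity:

    NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NS_CMD[@]}" "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    "$SPDK_DIR"/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    rpc() { "$SPDK_DIR"/scripts/rpc.py -s /tmp/host.sock "$@"; }
    rpc bdev_nvme_set_options -e 1     # option flags copied verbatim from the trace
    rpc framework_start_init           # needed because the app started with --wait-for-rpc
    rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
    rpc bdev_get_bdevs | jq -r '.[].name'   # -> nvme0n1 once the controller attaches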
00:35:30.074 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.333 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:30.333 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:30.333 12:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.333 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.591 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:30.591 12:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:31.525 12:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:32.459 12:38:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:32.459 12:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:33.832 12:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:34.766 12:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:35.701 [2024-12-10 12:38:42.317213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:35.701 [2024-12-10 12:38:42.317279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.701 [2024-12-10 12:38:42.317295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.701 [2024-12-10 12:38:42.317308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.701 [2024-12-10 12:38:42.317318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.701 [2024-12-10 12:38:42.317329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.701 [2024-12-10 12:38:42.317338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.701 [2024-12-10 12:38:42.317353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.701 [2024-12-10 12:38:42.317362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.701 [2024-12-10 12:38:42.317372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.701 [2024-12-10 12:38:42.317381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.701 [2024-12-10 12:38:42.317391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:35.701 [2024-12-10 12:38:42.327231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:35.701 [2024-12-10 12:38:42.337266] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:35.701 [2024-12-10 12:38:42.337291] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:35.701 [2024-12-10 12:38:42.337299] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:35.701 [2024-12-10 12:38:42.337307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:35.701 [2024-12-10 12:38:42.337347] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
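The @33/@34 lines around this point are a one-second polling loop: after @75/@76 delete the target address and down the link, reads start failing with errno 110, bdev_nvme tears the qpair down and retries per --reconnect-delay-sec 1 until --ctrlr-loss-timeout-sec 2 expires. A reconstruction of the polling helpers, assuming the same /tmp/host.sock host app and SPDK_DIR as in the earlier sketch:

    get_bdev_list() {
        "$SPDK_DIR"/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {                 # spin until the bdev list equals $1
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev ''                  # here: wait for nvme0n1 to disappear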
00:35:35.701 12:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.701 12:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.701 12:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:35.701 12:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.701 12:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.701 12:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.701 12:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:36.668 [2024-12-10 12:38:43.368204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:36.668 [2024-12-10 12:38:43.368261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325d00 with addr=10.0.0.2, port=4420 00:35:36.668 [2024-12-10 12:38:43.368286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:36.668 [2024-12-10 12:38:43.368324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:36.668 [2024-12-10 12:38:43.368962] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:36.668 [2024-12-10 12:38:43.369010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:36.668 [2024-12-10 12:38:43.369032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:36.668 [2024-12-10 12:38:43.369049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:36.668 [2024-12-10 12:38:43.369066] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:36.668 [2024-12-10 12:38:43.369082] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:36.668 [2024-12-10 12:38:43.369093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:36.668 [2024-12-10 12:38:43.369115] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:36.668 [2024-12-10 12:38:43.369126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:36.668 12:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.668 12:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:36.668 12:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:37.602 [2024-12-10 12:38:44.371631] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:37.602 [2024-12-10 12:38:44.371661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:37.602 [2024-12-10 12:38:44.371678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:37.602 [2024-12-10 12:38:44.371687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:37.602 [2024-12-10 12:38:44.371696] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:37.602 [2024-12-10 12:38:44.371710] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:37.602 [2024-12-10 12:38:44.371717] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:37.602 [2024-12-10 12:38:44.371724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:37.602 [2024-12-10 12:38:44.371755] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:37.602 [2024-12-10 12:38:44.371782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.602 [2024-12-10 12:38:44.371797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.602 [2024-12-10 12:38:44.371811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.602 [2024-12-10 12:38:44.371821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.602 [2024-12-10 12:38:44.371831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.602 [2024-12-10 12:38:44.371842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.602 [2024-12-10 12:38:44.371852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.602 [2024-12-10 12:38:44.371862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.602 [2024-12-10 12:38:44.371872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.602 [2024-12-10 12:38:44.371882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.602 [2024-12-10 12:38:44.371890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
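With the controller loss timeout expired, pending resets are cleared, the failed controller and its discovery entry are dropped, and nvme0n1 is gone. The test then restores the target port (the @82/@83/@86 steps that follow) and waits for discovery to re-attach the subsystem, which comes back as nvme1n1 under a fresh controller instance. A sketch of that recovery leg, reusing the wait_for_bdev helper from the earlier sketch:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # rediscovery creates a new controller, hence nvme1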
00:35:37.602 [2024-12-10 12:38:44.371931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325800 (9): Bad file descriptor 00:35:37.602 [2024-12-10 12:38:44.372926] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:37.602 [2024-12-10 12:38:44.372949] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:37.602 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.860 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:37.860 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:37.860 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.860 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:37.860 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:37.861 12:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.794 12:38:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:38.794 12:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:39.728 [2024-12-10 12:38:46.389367] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:39.728 [2024-12-10 12:38:46.389391] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:39.728 [2024-12-10 12:38:46.389420] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:39.728 [2024-12-10 12:38:46.516831] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:39.986 12:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:39.986 [2024-12-10 12:38:46.739097] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:39.986 [2024-12-10 12:38:46.740124] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x615000326e80:1 started. 
00:35:39.986 [2024-12-10 12:38:46.741750] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:39.986 [2024-12-10 12:38:46.741796] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:39.986 [2024-12-10 12:38:46.741858] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:39.986 [2024-12-10 12:38:46.741876] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:39.986 [2024-12-10 12:38:46.741887] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:39.986 [2024-12-10 12:38:46.790283] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x615000326e80 was disconnected and freed. delete nvme_qpair. 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3859618 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3859618 ']' 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3859618 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.920 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859618 00:35:41.178 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:41.178 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:41.178 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859618' 00:35:41.178 killing process with pid 3859618 00:35:41.178 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3859618 00:35:41.178 12:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3859618 00:35:42.113 
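Teardown as traced here and in the nvmftestfini pass that follows: both apps are killed, the kernel NVMe/TCP modules are unloaded, the tagged iptables rules are stripped, and the namespace is removed. A sketch, reusing $hostpid and $nvmfpid from the bring-up sketch above:

    kill "$hostpid";  wait "$hostpid" || true    # host app (discovery side)
    kill "$nvmfpid";  wait "$nvmfpid" || true    # target app in the namespace
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # the iptr helper: restore the ruleset minus anything tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1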
12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:42.113 rmmod nvme_tcp 00:35:42.113 rmmod nvme_fabrics 00:35:42.113 rmmod nvme_keyring 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3859382 ']' 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3859382 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3859382 ']' 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3859382 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859382 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859382' 00:35:42.113 killing process with pid 3859382 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3859382 00:35:42.113 12:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3859382 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.048 12:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.579 12:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:45.579 00:35:45.579 real 0m23.974s 00:35:45.579 user 0m31.293s 00:35:45.579 sys 0m5.530s 00:35:45.579 12:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.579 12:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.579 ************************************ 00:35:45.579 END TEST nvmf_discovery_remove_ifc 00:35:45.579 ************************************ 00:35:45.579 12:38:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:45.579 12:38:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:45.579 12:38:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.579 12:38:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.579 ************************************ 00:35:45.579 START TEST nvmf_identify_kernel_target 00:35:45.579 ************************************ 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:45.579 * Looking for test storage... 
00:35:45.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:45.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.579 --rc genhtml_branch_coverage=1 00:35:45.579 --rc genhtml_function_coverage=1 00:35:45.579 --rc genhtml_legend=1 00:35:45.579 --rc geninfo_all_blocks=1 00:35:45.579 --rc geninfo_unexecuted_blocks=1 00:35:45.579 00:35:45.579 ' 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:45.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.579 --rc genhtml_branch_coverage=1 00:35:45.579 --rc genhtml_function_coverage=1 00:35:45.579 --rc genhtml_legend=1 00:35:45.579 --rc geninfo_all_blocks=1 00:35:45.579 --rc geninfo_unexecuted_blocks=1 00:35:45.579 00:35:45.579 ' 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:45.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.579 --rc genhtml_branch_coverage=1 00:35:45.579 --rc genhtml_function_coverage=1 00:35:45.579 --rc genhtml_legend=1 00:35:45.579 --rc geninfo_all_blocks=1 00:35:45.579 --rc geninfo_unexecuted_blocks=1 00:35:45.579 00:35:45.579 ' 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:45.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.579 --rc genhtml_branch_coverage=1 00:35:45.579 --rc genhtml_function_coverage=1 00:35:45.579 --rc genhtml_legend=1 00:35:45.579 --rc geninfo_all_blocks=1 00:35:45.579 --rc geninfo_unexecuted_blocks=1 00:35:45.579 00:35:45.579 ' 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:45.579 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:45.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:45.580 12:38:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:50.844 12:38:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:50.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:50.844 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:50.844 Found net devices under 0000:af:00.0: cvl_0_0 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:50.844 Found net devices under 0000:af:00.1: cvl_0_1 00:35:50.844 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:50.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:35:50.845 00:35:50.845 --- 10.0.0.2 ping statistics --- 00:35:50.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.845 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:50.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:35:50.845 00:35:50.845 --- 10.0.0.1 ping statistics --- 00:35:50.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.845 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.845 12:38:57 
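
The nvmf_tcp_init trace above builds the two-endpoint topology used by the rest of the host tests: the first E810 port (cvl_0_0) moves into a private network namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and the two pings confirm reachability in both directions. Condensed into a standalone sketch, with interface names and addresses taken from this log:

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"          # target side leaves the root namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # The comment tags the rule so teardown can strip it via iptables-save | grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                            # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1        # target namespace -> initiator
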
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:50.845 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:53.374 Waiting for block devices as requested 00:35:53.374 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:53.374 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.374 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.374 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:53.632 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:53.632 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:53.632 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:53.632 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:53.891 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:53.891 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.891 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.891 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:54.149 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:54.149 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:54.149 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:54.407 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:54.407 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:54.407 No valid GPT data, bailing 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:54.407 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:54.666 00:35:54.666 Discovery Log Number of Records 2, Generation counter 2 00:35:54.666 =====Discovery Log Entry 0====== 00:35:54.666 trtype: tcp 00:35:54.666 adrfam: ipv4 00:35:54.666 subtype: current discovery subsystem 00:35:54.666 treq: not specified, sq flow control disable supported 00:35:54.666 portid: 1 00:35:54.666 trsvcid: 4420 00:35:54.666 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:54.666 traddr: 10.0.0.1 00:35:54.666 eflags: none 00:35:54.666 sectype: none 00:35:54.666 =====Discovery Log Entry 1====== 00:35:54.666 trtype: tcp 00:35:54.666 adrfam: ipv4 00:35:54.666 subtype: nvme subsystem 00:35:54.666 treq: not specified, sq flow control disable 
supported 00:35:54.666 portid: 1 00:35:54.666 trsvcid: 4420 00:35:54.666 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:54.666 traddr: 10.0.0.1 00:35:54.666 eflags: none 00:35:54.666 sectype: none 00:35:54.666 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:54.666 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:54.666 ===================================================== 00:35:54.666 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:54.666 ===================================================== 00:35:54.666 Controller Capabilities/Features 00:35:54.666 ================================ 00:35:54.666 Vendor ID: 0000 00:35:54.666 Subsystem Vendor ID: 0000 00:35:54.666 Serial Number: 14e655ba7fa8e73c9553 00:35:54.666 Model Number: Linux 00:35:54.666 Firmware Version: 6.8.9-20 00:35:54.666 Recommended Arb Burst: 0 00:35:54.666 IEEE OUI Identifier: 00 00 00 00:35:54.666 Multi-path I/O 00:35:54.666 May have multiple subsystem ports: No 00:35:54.666 May have multiple controllers: No 00:35:54.666 Associated with SR-IOV VF: No 00:35:54.666 Max Data Transfer Size: Unlimited 00:35:54.666 Max Number of Namespaces: 0 00:35:54.666 Max Number of I/O Queues: 1024 00:35:54.666 NVMe Specification Version (VS): 1.3 00:35:54.666 NVMe Specification Version (Identify): 1.3 00:35:54.666 Maximum Queue Entries: 1024 00:35:54.666 Contiguous Queues Required: No 00:35:54.666 Arbitration Mechanisms Supported 00:35:54.666 Weighted Round Robin: Not Supported 00:35:54.666 Vendor Specific: Not Supported 00:35:54.666 Reset Timeout: 7500 ms 00:35:54.666 Doorbell Stride: 4 bytes 00:35:54.666 NVM Subsystem Reset: Not Supported 00:35:54.666 Command Sets Supported 00:35:54.666 NVM Command Set: Supported 00:35:54.666 Boot Partition: Not Supported 00:35:54.666 Memory Page Size Minimum: 4096 bytes 00:35:54.666 Memory Page Size Maximum: 4096 bytes 00:35:54.666 Persistent Memory Region: Not Supported 00:35:54.666 Optional Asynchronous Events Supported 00:35:54.666 Namespace Attribute Notices: Not Supported 00:35:54.666 Firmware Activation Notices: Not Supported 00:35:54.666 ANA Change Notices: Not Supported 00:35:54.666 PLE Aggregate Log Change Notices: Not Supported 00:35:54.666 LBA Status Info Alert Notices: Not Supported 00:35:54.666 EGE Aggregate Log Change Notices: Not Supported 00:35:54.666 Normal NVM Subsystem Shutdown event: Not Supported 00:35:54.666 Zone Descriptor Change Notices: Not Supported 00:35:54.666 Discovery Log Change Notices: Supported 00:35:54.666 Controller Attributes 00:35:54.666 128-bit Host Identifier: Not Supported 00:35:54.666 Non-Operational Permissive Mode: Not Supported 00:35:54.666 NVM Sets: Not Supported 00:35:54.666 Read Recovery Levels: Not Supported 00:35:54.666 Endurance Groups: Not Supported 00:35:54.666 Predictable Latency Mode: Not Supported 00:35:54.666 Traffic Based Keep ALive: Not Supported 00:35:54.666 Namespace Granularity: Not Supported 00:35:54.666 SQ Associations: Not Supported 00:35:54.666 UUID List: Not Supported 00:35:54.666 Multi-Domain Subsystem: Not Supported 00:35:54.666 Fixed Capacity Management: Not Supported 00:35:54.666 Variable Capacity Management: Not Supported 00:35:54.666 Delete Endurance Group: Not Supported 00:35:54.666 Delete NVM Set: Not Supported 00:35:54.666 Extended LBA Formats Supported: Not Supported 00:35:54.666 Flexible Data Placement 
Supported: Not Supported 00:35:54.666 00:35:54.666 Controller Memory Buffer Support 00:35:54.666 ================================ 00:35:54.666 Supported: No 00:35:54.666 00:35:54.666 Persistent Memory Region Support 00:35:54.666 ================================ 00:35:54.666 Supported: No 00:35:54.666 00:35:54.666 Admin Command Set Attributes 00:35:54.666 ============================ 00:35:54.666 Security Send/Receive: Not Supported 00:35:54.666 Format NVM: Not Supported 00:35:54.666 Firmware Activate/Download: Not Supported 00:35:54.666 Namespace Management: Not Supported 00:35:54.666 Device Self-Test: Not Supported 00:35:54.666 Directives: Not Supported 00:35:54.666 NVMe-MI: Not Supported 00:35:54.666 Virtualization Management: Not Supported 00:35:54.666 Doorbell Buffer Config: Not Supported 00:35:54.666 Get LBA Status Capability: Not Supported 00:35:54.666 Command & Feature Lockdown Capability: Not Supported 00:35:54.666 Abort Command Limit: 1 00:35:54.666 Async Event Request Limit: 1 00:35:54.666 Number of Firmware Slots: N/A 00:35:54.666 Firmware Slot 1 Read-Only: N/A 00:35:54.666 Firmware Activation Without Reset: N/A 00:35:54.666 Multiple Update Detection Support: N/A 00:35:54.666 Firmware Update Granularity: No Information Provided 00:35:54.666 Per-Namespace SMART Log: No 00:35:54.666 Asymmetric Namespace Access Log Page: Not Supported 00:35:54.666 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:54.666 Command Effects Log Page: Not Supported 00:35:54.666 Get Log Page Extended Data: Supported 00:35:54.666 Telemetry Log Pages: Not Supported 00:35:54.666 Persistent Event Log Pages: Not Supported 00:35:54.666 Supported Log Pages Log Page: May Support 00:35:54.666 Commands Supported & Effects Log Page: Not Supported 00:35:54.666 Feature Identifiers & Effects Log Page:May Support 00:35:54.666 NVMe-MI Commands & Effects Log Page: May Support 00:35:54.666 Data Area 4 for Telemetry Log: Not Supported 00:35:54.666 Error Log Page Entries Supported: 1 00:35:54.667 Keep Alive: Not Supported 00:35:54.667 00:35:54.667 NVM Command Set Attributes 00:35:54.667 ========================== 00:35:54.667 Submission Queue Entry Size 00:35:54.667 Max: 1 00:35:54.667 Min: 1 00:35:54.667 Completion Queue Entry Size 00:35:54.667 Max: 1 00:35:54.667 Min: 1 00:35:54.667 Number of Namespaces: 0 00:35:54.667 Compare Command: Not Supported 00:35:54.667 Write Uncorrectable Command: Not Supported 00:35:54.667 Dataset Management Command: Not Supported 00:35:54.667 Write Zeroes Command: Not Supported 00:35:54.667 Set Features Save Field: Not Supported 00:35:54.667 Reservations: Not Supported 00:35:54.667 Timestamp: Not Supported 00:35:54.667 Copy: Not Supported 00:35:54.667 Volatile Write Cache: Not Present 00:35:54.667 Atomic Write Unit (Normal): 1 00:35:54.667 Atomic Write Unit (PFail): 1 00:35:54.667 Atomic Compare & Write Unit: 1 00:35:54.667 Fused Compare & Write: Not Supported 00:35:54.667 Scatter-Gather List 00:35:54.667 SGL Command Set: Supported 00:35:54.667 SGL Keyed: Not Supported 00:35:54.667 SGL Bit Bucket Descriptor: Not Supported 00:35:54.667 SGL Metadata Pointer: Not Supported 00:35:54.667 Oversized SGL: Not Supported 00:35:54.667 SGL Metadata Address: Not Supported 00:35:54.667 SGL Offset: Supported 00:35:54.667 Transport SGL Data Block: Not Supported 00:35:54.667 Replay Protected Memory Block: Not Supported 00:35:54.667 00:35:54.667 Firmware Slot Information 00:35:54.667 ========================= 00:35:54.667 Active slot: 0 00:35:54.667 00:35:54.667 00:35:54.667 Error Log 00:35:54.667 
========= 00:35:54.667 00:35:54.667 Active Namespaces 00:35:54.667 ================= 00:35:54.667 Discovery Log Page 00:35:54.667 ================== 00:35:54.667 Generation Counter: 2 00:35:54.667 Number of Records: 2 00:35:54.667 Record Format: 0 00:35:54.667 00:35:54.667 Discovery Log Entry 0 00:35:54.667 ---------------------- 00:35:54.667 Transport Type: 3 (TCP) 00:35:54.667 Address Family: 1 (IPv4) 00:35:54.667 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:54.667 Entry Flags: 00:35:54.667 Duplicate Returned Information: 0 00:35:54.667 Explicit Persistent Connection Support for Discovery: 0 00:35:54.667 Transport Requirements: 00:35:54.667 Secure Channel: Not Specified 00:35:54.667 Port ID: 1 (0x0001) 00:35:54.667 Controller ID: 65535 (0xffff) 00:35:54.667 Admin Max SQ Size: 32 00:35:54.667 Transport Service Identifier: 4420 00:35:54.667 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:54.667 Transport Address: 10.0.0.1 00:35:54.667 Discovery Log Entry 1 00:35:54.667 ---------------------- 00:35:54.667 Transport Type: 3 (TCP) 00:35:54.667 Address Family: 1 (IPv4) 00:35:54.667 Subsystem Type: 2 (NVM Subsystem) 00:35:54.667 Entry Flags: 00:35:54.667 Duplicate Returned Information: 0 00:35:54.667 Explicit Persistent Connection Support for Discovery: 0 00:35:54.667 Transport Requirements: 00:35:54.667 Secure Channel: Not Specified 00:35:54.667 Port ID: 1 (0x0001) 00:35:54.667 Controller ID: 65535 (0xffff) 00:35:54.667 Admin Max SQ Size: 32 00:35:54.667 Transport Service Identifier: 4420 00:35:54.667 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:54.667 Transport Address: 10.0.0.1 00:35:54.667 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.926 get_feature(0x01) failed 00:35:54.926 get_feature(0x02) failed 00:35:54.926 get_feature(0x04) failed 00:35:54.926 ===================================================== 00:35:54.926 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:54.926 ===================================================== 00:35:54.926 Controller Capabilities/Features 00:35:54.926 ================================ 00:35:54.926 Vendor ID: 0000 00:35:54.926 Subsystem Vendor ID: 0000 00:35:54.926 Serial Number: 32a30ffa15c58cf23c8c 00:35:54.926 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:54.926 Firmware Version: 6.8.9-20 00:35:54.926 Recommended Arb Burst: 6 00:35:54.926 IEEE OUI Identifier: 00 00 00 00:35:54.926 Multi-path I/O 00:35:54.926 May have multiple subsystem ports: Yes 00:35:54.926 May have multiple controllers: Yes 00:35:54.926 Associated with SR-IOV VF: No 00:35:54.926 Max Data Transfer Size: Unlimited 00:35:54.926 Max Number of Namespaces: 1024 00:35:54.926 Max Number of I/O Queues: 128 00:35:54.926 NVMe Specification Version (VS): 1.3 00:35:54.926 NVMe Specification Version (Identify): 1.3 00:35:54.926 Maximum Queue Entries: 1024 00:35:54.926 Contiguous Queues Required: No 00:35:54.926 Arbitration Mechanisms Supported 00:35:54.926 Weighted Round Robin: Not Supported 00:35:54.926 Vendor Specific: Not Supported 00:35:54.926 Reset Timeout: 7500 ms 00:35:54.926 Doorbell Stride: 4 bytes 00:35:54.926 NVM Subsystem Reset: Not Supported 00:35:54.926 Command Sets Supported 00:35:54.926 NVM Command Set: Supported 00:35:54.926 Boot Partition: Not Supported 00:35:54.926 
Memory Page Size Minimum: 4096 bytes 00:35:54.926 Memory Page Size Maximum: 4096 bytes 00:35:54.926 Persistent Memory Region: Not Supported 00:35:54.926 Optional Asynchronous Events Supported 00:35:54.926 Namespace Attribute Notices: Supported 00:35:54.926 Firmware Activation Notices: Not Supported 00:35:54.926 ANA Change Notices: Supported 00:35:54.926 PLE Aggregate Log Change Notices: Not Supported 00:35:54.926 LBA Status Info Alert Notices: Not Supported 00:35:54.926 EGE Aggregate Log Change Notices: Not Supported 00:35:54.926 Normal NVM Subsystem Shutdown event: Not Supported 00:35:54.926 Zone Descriptor Change Notices: Not Supported 00:35:54.926 Discovery Log Change Notices: Not Supported 00:35:54.926 Controller Attributes 00:35:54.926 128-bit Host Identifier: Supported 00:35:54.926 Non-Operational Permissive Mode: Not Supported 00:35:54.926 NVM Sets: Not Supported 00:35:54.926 Read Recovery Levels: Not Supported 00:35:54.926 Endurance Groups: Not Supported 00:35:54.926 Predictable Latency Mode: Not Supported 00:35:54.926 Traffic Based Keep ALive: Supported 00:35:54.926 Namespace Granularity: Not Supported 00:35:54.926 SQ Associations: Not Supported 00:35:54.926 UUID List: Not Supported 00:35:54.926 Multi-Domain Subsystem: Not Supported 00:35:54.926 Fixed Capacity Management: Not Supported 00:35:54.926 Variable Capacity Management: Not Supported 00:35:54.926 Delete Endurance Group: Not Supported 00:35:54.926 Delete NVM Set: Not Supported 00:35:54.926 Extended LBA Formats Supported: Not Supported 00:35:54.926 Flexible Data Placement Supported: Not Supported 00:35:54.926 00:35:54.926 Controller Memory Buffer Support 00:35:54.926 ================================ 00:35:54.926 Supported: No 00:35:54.926 00:35:54.926 Persistent Memory Region Support 00:35:54.926 ================================ 00:35:54.927 Supported: No 00:35:54.927 00:35:54.927 Admin Command Set Attributes 00:35:54.927 ============================ 00:35:54.927 Security Send/Receive: Not Supported 00:35:54.927 Format NVM: Not Supported 00:35:54.927 Firmware Activate/Download: Not Supported 00:35:54.927 Namespace Management: Not Supported 00:35:54.927 Device Self-Test: Not Supported 00:35:54.927 Directives: Not Supported 00:35:54.927 NVMe-MI: Not Supported 00:35:54.927 Virtualization Management: Not Supported 00:35:54.927 Doorbell Buffer Config: Not Supported 00:35:54.927 Get LBA Status Capability: Not Supported 00:35:54.927 Command & Feature Lockdown Capability: Not Supported 00:35:54.927 Abort Command Limit: 4 00:35:54.927 Async Event Request Limit: 4 00:35:54.927 Number of Firmware Slots: N/A 00:35:54.927 Firmware Slot 1 Read-Only: N/A 00:35:54.927 Firmware Activation Without Reset: N/A 00:35:54.927 Multiple Update Detection Support: N/A 00:35:54.927 Firmware Update Granularity: No Information Provided 00:35:54.927 Per-Namespace SMART Log: Yes 00:35:54.927 Asymmetric Namespace Access Log Page: Supported 00:35:54.927 ANA Transition Time : 10 sec 00:35:54.927 00:35:54.927 Asymmetric Namespace Access Capabilities 00:35:54.927 ANA Optimized State : Supported 00:35:54.927 ANA Non-Optimized State : Supported 00:35:54.927 ANA Inaccessible State : Supported 00:35:54.927 ANA Persistent Loss State : Supported 00:35:54.927 ANA Change State : Supported 00:35:54.927 ANAGRPID is not changed : No 00:35:54.927 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:54.927 00:35:54.927 ANA Group Identifier Maximum : 128 00:35:54.927 Number of ANA Group Identifiers : 128 00:35:54.927 Max Number of Allowed Namespaces : 1024 00:35:54.927 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:54.927 Command Effects Log Page: Supported 00:35:54.927 Get Log Page Extended Data: Supported 00:35:54.927 Telemetry Log Pages: Not Supported 00:35:54.927 Persistent Event Log Pages: Not Supported 00:35:54.927 Supported Log Pages Log Page: May Support 00:35:54.927 Commands Supported & Effects Log Page: Not Supported 00:35:54.927 Feature Identifiers & Effects Log Page:May Support 00:35:54.927 NVMe-MI Commands & Effects Log Page: May Support 00:35:54.927 Data Area 4 for Telemetry Log: Not Supported 00:35:54.927 Error Log Page Entries Supported: 128 00:35:54.927 Keep Alive: Supported 00:35:54.927 Keep Alive Granularity: 1000 ms 00:35:54.927 00:35:54.927 NVM Command Set Attributes 00:35:54.927 ========================== 00:35:54.927 Submission Queue Entry Size 00:35:54.927 Max: 64 00:35:54.927 Min: 64 00:35:54.927 Completion Queue Entry Size 00:35:54.927 Max: 16 00:35:54.927 Min: 16 00:35:54.927 Number of Namespaces: 1024 00:35:54.927 Compare Command: Not Supported 00:35:54.927 Write Uncorrectable Command: Not Supported 00:35:54.927 Dataset Management Command: Supported 00:35:54.927 Write Zeroes Command: Supported 00:35:54.927 Set Features Save Field: Not Supported 00:35:54.927 Reservations: Not Supported 00:35:54.927 Timestamp: Not Supported 00:35:54.927 Copy: Not Supported 00:35:54.927 Volatile Write Cache: Present 00:35:54.927 Atomic Write Unit (Normal): 1 00:35:54.927 Atomic Write Unit (PFail): 1 00:35:54.927 Atomic Compare & Write Unit: 1 00:35:54.927 Fused Compare & Write: Not Supported 00:35:54.927 Scatter-Gather List 00:35:54.927 SGL Command Set: Supported 00:35:54.927 SGL Keyed: Not Supported 00:35:54.927 SGL Bit Bucket Descriptor: Not Supported 00:35:54.927 SGL Metadata Pointer: Not Supported 00:35:54.927 Oversized SGL: Not Supported 00:35:54.927 SGL Metadata Address: Not Supported 00:35:54.927 SGL Offset: Supported 00:35:54.927 Transport SGL Data Block: Not Supported 00:35:54.927 Replay Protected Memory Block: Not Supported 00:35:54.927 00:35:54.927 Firmware Slot Information 00:35:54.927 ========================= 00:35:54.927 Active slot: 0 00:35:54.927 00:35:54.927 Asymmetric Namespace Access 00:35:54.927 =========================== 00:35:54.927 Change Count : 0 00:35:54.927 Number of ANA Group Descriptors : 1 00:35:54.927 ANA Group Descriptor : 0 00:35:54.927 ANA Group ID : 1 00:35:54.927 Number of NSID Values : 1 00:35:54.927 Change Count : 0 00:35:54.927 ANA State : 1 00:35:54.927 Namespace Identifier : 1 00:35:54.927 00:35:54.927 Commands Supported and Effects 00:35:54.927 ============================== 00:35:54.927 Admin Commands 00:35:54.927 -------------- 00:35:54.927 Get Log Page (02h): Supported 00:35:54.927 Identify (06h): Supported 00:35:54.927 Abort (08h): Supported 00:35:54.927 Set Features (09h): Supported 00:35:54.927 Get Features (0Ah): Supported 00:35:54.927 Asynchronous Event Request (0Ch): Supported 00:35:54.927 Keep Alive (18h): Supported 00:35:54.927 I/O Commands 00:35:54.927 ------------ 00:35:54.927 Flush (00h): Supported 00:35:54.927 Write (01h): Supported LBA-Change 00:35:54.927 Read (02h): Supported 00:35:54.927 Write Zeroes (08h): Supported LBA-Change 00:35:54.927 Dataset Management (09h): Supported 00:35:54.927 00:35:54.927 Error Log 00:35:54.927 ========= 00:35:54.927 Entry: 0 00:35:54.927 Error Count: 0x3 00:35:54.927 Submission Queue Id: 0x0 00:35:54.927 Command Id: 0x5 00:35:54.927 Phase Bit: 0 00:35:54.927 Status Code: 0x2 00:35:54.927 Status Code Type: 0x0 00:35:54.927 Do Not Retry: 1 00:35:54.927 
Error Location: 0x28 00:35:54.927 LBA: 0x0 00:35:54.927 Namespace: 0x0 00:35:54.927 Vendor Log Page: 0x0 00:35:54.927 ----------- 00:35:54.927 Entry: 1 00:35:54.927 Error Count: 0x2 00:35:54.927 Submission Queue Id: 0x0 00:35:54.927 Command Id: 0x5 00:35:54.927 Phase Bit: 0 00:35:54.927 Status Code: 0x2 00:35:54.927 Status Code Type: 0x0 00:35:54.927 Do Not Retry: 1 00:35:54.927 Error Location: 0x28 00:35:54.927 LBA: 0x0 00:35:54.927 Namespace: 0x0 00:35:54.927 Vendor Log Page: 0x0 00:35:54.927 ----------- 00:35:54.927 Entry: 2 00:35:54.927 Error Count: 0x1 00:35:54.927 Submission Queue Id: 0x0 00:35:54.927 Command Id: 0x4 00:35:54.927 Phase Bit: 0 00:35:54.927 Status Code: 0x2 00:35:54.927 Status Code Type: 0x0 00:35:54.927 Do Not Retry: 1 00:35:54.927 Error Location: 0x28 00:35:54.927 LBA: 0x0 00:35:54.927 Namespace: 0x0 00:35:54.927 Vendor Log Page: 0x0 00:35:54.927 00:35:54.927 Number of Queues 00:35:54.927 ================ 00:35:54.927 Number of I/O Submission Queues: 128 00:35:54.927 Number of I/O Completion Queues: 128 00:35:54.927 00:35:54.927 ZNS Specific Controller Data 00:35:54.927 ============================ 00:35:54.927 Zone Append Size Limit: 0 00:35:54.927 00:35:54.927 00:35:54.927 Active Namespaces 00:35:54.927 ================= 00:35:54.927 get_feature(0x05) failed 00:35:54.927 Namespace ID:1 00:35:54.927 Command Set Identifier: NVM (00h) 00:35:54.927 Deallocate: Supported 00:35:54.927 Deallocated/Unwritten Error: Not Supported 00:35:54.927 Deallocated Read Value: Unknown 00:35:54.927 Deallocate in Write Zeroes: Not Supported 00:35:54.927 Deallocated Guard Field: 0xFFFF 00:35:54.927 Flush: Supported 00:35:54.927 Reservation: Not Supported 00:35:54.927 Namespace Sharing Capabilities: Multiple Controllers 00:35:54.927 Size (in LBAs): 1953525168 (931GiB) 00:35:54.927 Capacity (in LBAs): 1953525168 (931GiB) 00:35:54.927 Utilization (in LBAs): 1953525168 (931GiB) 00:35:54.927 UUID: ebc2f973-ef0d-4ee3-8be4-3f732355970b 00:35:54.927 Thin Provisioning: Not Supported 00:35:54.927 Per-NS Atomic Units: Yes 00:35:54.927 Atomic Boundary Size (Normal): 0 00:35:54.927 Atomic Boundary Size (PFail): 0 00:35:54.927 Atomic Boundary Offset: 0 00:35:54.927 NGUID/EUI64 Never Reused: No 00:35:54.927 ANA group ID: 1 00:35:54.927 Namespace Write Protected: No 00:35:54.927 Number of LBA Formats: 1 00:35:54.927 Current LBA Format: LBA Format #00 00:35:54.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:54.927 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:54.927 rmmod nvme_tcp 00:35:54.927 rmmod nvme_fabrics 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:54.927 12:39:01 
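
The discovery and identify output above came from a target assembled entirely through the kernel nvmet configfs tree in the configure_kernel_target trace earlier (modprobe nvmet, the mkdir/echo/ln -s sequence, then nvme discover). Since xtrace does not show where each echo is redirected, the attribute file names below are the standard kernel nvmet configfs names, inferred rather than read from this log:

    NQN=nqn.2016-06.io.spdk:testnqn
    SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
    PORT=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    modprobe nvmet-tcp                     # TCP transport; often auto-loaded on demand
    mkdir "$SUBSYS" "$SUBSYS/namespaces/1" "$PORT"
    echo "SPDK-$NQN"  > "$SUBSYS/attr_model"           # model string seen in identify
    echo 1            > "$SUBSYS/attr_allow_any_host"  # no per-host whitelist
    echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
    echo 1            > "$SUBSYS/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUBSYS" "$PORT/subsystems/"    # expose the subsystem on the port
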
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:54.927 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:54.928 12:39:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:57.458 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:59.986 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:59.986 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:00.921 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:00.921 00:36:00.921 real 0m15.533s 00:36:00.921 user 0m3.922s 00:36:00.921 sys 0m7.963s 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:00.921 ************************************ 00:36:00.921 END TEST nvmf_identify_kernel_target 00:36:00.921 ************************************ 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.921 ************************************ 00:36:00.921 START TEST nvmf_auth_host 00:36:00.921 ************************************ 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:00.921 * Looking for test storage... 
00:36:00.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:36:00.921 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.180 --rc genhtml_branch_coverage=1 00:36:01.180 --rc genhtml_function_coverage=1 00:36:01.180 --rc genhtml_legend=1 00:36:01.180 --rc geninfo_all_blocks=1 00:36:01.180 --rc geninfo_unexecuted_blocks=1 00:36:01.180 00:36:01.180 ' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.180 --rc genhtml_branch_coverage=1 00:36:01.180 --rc genhtml_function_coverage=1 00:36:01.180 --rc genhtml_legend=1 00:36:01.180 --rc geninfo_all_blocks=1 00:36:01.180 --rc geninfo_unexecuted_blocks=1 00:36:01.180 00:36:01.180 ' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.180 --rc genhtml_branch_coverage=1 00:36:01.180 --rc genhtml_function_coverage=1 00:36:01.180 --rc genhtml_legend=1 00:36:01.180 --rc geninfo_all_blocks=1 00:36:01.180 --rc geninfo_unexecuted_blocks=1 00:36:01.180 00:36:01.180 ' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.180 --rc genhtml_branch_coverage=1 00:36:01.180 --rc genhtml_function_coverage=1 00:36:01.180 --rc genhtml_legend=1 00:36:01.180 --rc geninfo_all_blocks=1 00:36:01.180 --rc geninfo_unexecuted_blocks=1 00:36:01.180 00:36:01.180 ' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.180 12:39:07 
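
The scripts/common.sh trace above (cmp_versions 1.15 '<' 2) decides whether the installed lcov predates version 2 by splitting both version strings on ".-:" and comparing the components numerically, left to right. The same logic as a standalone helper, assuming purely numeric components:

    lt() {                                   # lt 1.15 2 succeeds because 1.15 < 2
        local -a a b; local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing parts count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                             # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov predates 2.x"
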
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:01.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.180 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:01.181 12:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:06.446 12:39:13 
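The "[: : integer expression expected" complaint captured a few entries back is harmless: xtrace shows nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', an integer comparison whose operand is an empty variable. `[` rejects the empty operand, the test returns non-zero, and build_nvmf_app_args simply continues on the false branch. A minimal guard that avoids the noise would default the value before comparing (SOME_FLAG is a hypothetical stand-in for whichever variable was empty in this run):

# Integer-safe variant of the test traced at nvmf/common.sh@33.
if [[ ${SOME_FLAG:-0} -eq 1 ]]; then
    echo "optional flag set"    # hypothetical branch body
fi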
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:06.446 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:06.446 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.446 
12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:06.446 Found net devices under 0000:af:00.0: cvl_0_0 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:06.446 Found net devices under 0000:af:00.1: cvl_0_1 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.446 12:39:13 
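For reference, the device scan traced above reduces to a couple of sysfs lookups: every supported PCI function (here the two Intel 0x159b ports) exposes its bound kernel interfaces under /sys/bus/pci/devices/<bdf>/net/. A standalone replay with this run's first port:

# Replay of the lookup behind "Found net devices under 0000:af:00.0: cvl_0_0".
pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")          # keep interface names only
printf 'Found net devices under %s: %s\n' "$pci" "${pci_net_devs[*]}"
for dev in "${pci_net_devs[@]}"; do              # the script also checks link state
    [[ $(< "/sys/class/net/$dev/operstate") == up ]] && echo "$dev is up"
done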
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:06.446 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:06.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:36:06.705 00:36:06.705 --- 10.0.0.2 ping statistics --- 00:36:06.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.705 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:06.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:36:06.705 00:36:06.705 --- 10.0.0.1 ping statistics --- 00:36:06.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.705 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3872041 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3872041 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3872041 ']' 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
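The nvmf_tcp_init sequence above builds a two-endpoint topology on a single host by moving the target port into its own network namespace, so target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) talk over a real link. Condensed from the traced commands:

# Condensed replay of nvmf_tcp_init with this run's names and addresses.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator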
00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.705 12:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=29dff05bdcc207bc1dea2716dac5d6fa 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZrP 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 29dff05bdcc207bc1dea2716dac5d6fa 0 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 29dff05bdcc207bc1dea2716dac5d6fa 0 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=29dff05bdcc207bc1dea2716dac5d6fa 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZrP 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZrP 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZrP 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.642 12:39:14 
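nvmfappstart above launches nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF -L nvme_auth), records nvmfpid=3872041, and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that wait loop; the real helper in autotest_common.sh carries more bookkeeping, and the retry budget here is illustrative:

# Poll the app's RPC socket until it responds or the process dies.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1      # app exited early
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1                                         # timed out
}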
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=db65fc8a7fee98c22ed0498fb22b19cdf0906b20aefad617f9d688f7e51bdb69 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2bX 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key db65fc8a7fee98c22ed0498fb22b19cdf0906b20aefad617f9d688f7e51bdb69 3 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 db65fc8a7fee98c22ed0498fb22b19cdf0906b20aefad617f9d688f7e51bdb69 3 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=db65fc8a7fee98c22ed0498fb22b19cdf0906b20aefad617f9d688f7e51bdb69 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2bX 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2bX 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2bX 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83be21563e4f57f7d55a2dcad4550a0edd62b0dc6d725862 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xTz 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83be21563e4f57f7d55a2dcad4550a0edd62b0dc6d725862 0 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83be21563e4f57f7d55a2dcad4550a0edd62b0dc6d725862 0 
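The gen_dhchap_key traces above all follow one pattern: read N random bytes with xxd -p, then hand the hex string and a digest id (null=0, sha256=1, sha384=2, sha512=3, per the digests map) to an inline python snippet. The snippet's body is not echoed by xtrace; judging by the DHHC-1:<digest>:<base64>: strings that surface later in the log, where the base64 payload decodes back to the hex key, it plausibly appends a little-endian CRC32 before encoding. A reconstruction under that assumption:

# Reconstruction of format_dhchap_key; the CRC32 tail is inferred, not traced.
format_dhchap_key() {    # usage: format_dhchap_key <hex-key> <digest-id>
    python -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
' "$1" "$2"
}

key=$(xxd -p -c0 -l 24 /dev/urandom)                 # 24 bytes -> 48 hex chars
format_dhchap_key "$key" 0                           # digest 0 == null

Run against this log's keys[1] value (83be2156...), the reconstruction would yield the DHHC-1:00:ODNiZTIx... string echoed later at host/auth.sh@45, assuming the checksum layout above.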
00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83be21563e4f57f7d55a2dcad4550a0edd62b0dc6d725862 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xTz 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xTz 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xTz 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:07.642 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d2ef0958a2e33c3a7dcb8eb8f3eef089489e7ab677983248 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Yuc 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d2ef0958a2e33c3a7dcb8eb8f3eef089489e7ab677983248 2 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d2ef0958a2e33c3a7dcb8eb8f3eef089489e7ab677983248 2 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d2ef0958a2e33c3a7dcb8eb8f3eef089489e7ab677983248 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Yuc 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Yuc 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Yuc 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.901 12:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=87ba2ac150543108c10694943f6fb055 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GA6 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 87ba2ac150543108c10694943f6fb055 1 00:36:07.901 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 87ba2ac150543108c10694943f6fb055 1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=87ba2ac150543108c10694943f6fb055 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GA6 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GA6 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GA6 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5d5111b4a2d992555c70f1aa04cad311 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BFH 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5d5111b4a2d992555c70f1aa04cad311 1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5d5111b4a2d992555c70f1aa04cad311 1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=5d5111b4a2d992555c70f1aa04cad311 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BFH 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BFH 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BFH 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6357aed02901a612e271ce3aaa1b0e01ab6e56985bed76db 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.t1A 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6357aed02901a612e271ce3aaa1b0e01ab6e56985bed76db 2 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6357aed02901a612e271ce3aaa1b0e01ab6e56985bed76db 2 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6357aed02901a612e271ce3aaa1b0e01ab6e56985bed76db 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.t1A 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.t1A 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.t1A 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:07.902 12:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0237f5dfcfaa86fa816cec16412725e4 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vi1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0237f5dfcfaa86fa816cec16412725e4 0 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0237f5dfcfaa86fa816cec16412725e4 0 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0237f5dfcfaa86fa816cec16412725e4 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:07.902 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vi1 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vi1 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.vi1 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cff68812c5181a7d87855deb4ffd49a389ca5fba60d8ce0cae35e11b4259bf00 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4Ak 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cff68812c5181a7d87855deb4ffd49a389ca5fba60d8ce0cae35e11b4259bf00 3 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cff68812c5181a7d87855deb4ffd49a389ca5fba60d8ce0cae35e11b4259bf00 3 00:36:08.161 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cff68812c5181a7d87855deb4ffd49a389ca5fba60d8ce0cae35e11b4259bf00 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4Ak 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4Ak 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.4Ak 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3872041 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3872041 ']' 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.162 12:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZrP 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2bX ]] 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2bX 00:36:08.426 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xTz 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Yuc ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Yuc 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GA6 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BFH ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BFH 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.t1A 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vi1 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vi1 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.4Ak 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.427 12:39:15 
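The host/auth.sh@80-82 entries repeat once per generated key file; the loop they trace registers each key with the running target as a named keyring entry, adding the controller-side key only when a counterpart was generated:

# Shape of the registration loop traced above.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done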
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:08.427 12:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:10.958 Waiting for block devices as requested 00:36:10.958 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:10.958 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:11.294 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:11.294 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:11.294 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:11.294 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:11.294 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:11.565 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:11.565 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:11.565 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:11.565 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:11.823 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:11.823 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:11.823 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:11.823 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:12.081 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.081 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:12.646 No valid GPT data, bailing 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:12.646 12:39:19 
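The mkdir calls above create the configfs skeleton for the kernel nvmet target (subsystem, namespace 1, port 1); the echo entries that follow populate it. Laid out end to end with this run's values, and assuming the standard nvmet attribute names behind each traced write:

# Plausible mapping of the traced writes onto the nvmet configfs layout.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                  # expose the subsystem on the port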
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:12.646 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:12.903 00:36:12.903 Discovery Log Number of Records 2, Generation counter 2 00:36:12.903 =====Discovery Log Entry 0====== 00:36:12.903 trtype: tcp 00:36:12.903 adrfam: ipv4 00:36:12.903 subtype: current discovery subsystem 00:36:12.903 treq: not specified, sq flow control disable supported 00:36:12.903 portid: 1 00:36:12.903 trsvcid: 4420 00:36:12.903 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:12.903 traddr: 10.0.0.1 00:36:12.903 eflags: none 00:36:12.903 sectype: none 00:36:12.903 =====Discovery Log Entry 1====== 00:36:12.903 trtype: tcp 00:36:12.903 adrfam: ipv4 00:36:12.903 subtype: nvme subsystem 00:36:12.903 treq: not specified, sq flow control disable supported 00:36:12.903 portid: 1 00:36:12.903 trsvcid: 4420 00:36:12.903 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:12.903 traddr: 10.0.0.1 00:36:12.903 eflags: none 00:36:12.903 sectype: none 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:12.903 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.904 nvme0n1 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.904 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
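For reference, each connect_authenticate iteration traced above boils down to the following RPC sequence. This is a minimal sketch, assuming SPDK's scripts/rpc.py is on the path and that the DH-HMAC-CHAP secrets have already been registered in the keyring under the names key0/ckey0 (that setup happens earlier in the test and is not shown in this excerpt); rpc_cmd in the trace is a thin wrapper around rpc.py.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass, assuming a running SPDK target
# listening on 10.0.0.1:4420 and keyring entries key0/ckey0 created earlier.
rpc=scripts/rpc.py

# Restrict the initiator to the single digest/dhgroup combination under test.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with the host key (key0) and the bidirectional controller key (ckey0);
# the attach only succeeds if DH-HMAC-CHAP authentication completes.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up, then tear it down for the next iteration.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0

The trace that follows repeats exactly this cycle for every keyid (0-4, with key4 having no controller key) and every dhgroup (ffdhe2048 through ffdhe8192), which is why the same set_options/attach/get_controllers/detach pattern recurs below with only the key names and dhgroup changing.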
00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.162 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.163 nvme0n1 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.163 12:39:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.163 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.421 12:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.421 nvme0n1 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:13.421 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.422 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.684 nvme0n1 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.684 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.946 nvme0n1 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.946 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.947 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.204 nvme0n1 00:36:14.204 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.204 12:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.204 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.204 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.204 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.204 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.205 12:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.464 nvme0n1 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.464 
12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.464 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.722 nvme0n1 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.722 12:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.722 nvme0n1 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.722 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.981 12:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.981 nvme0n1 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.981 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:15.240 12:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.240 12:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.240 nvme0n1 00:36:15.240 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.240 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.240 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.240 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.240 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.240 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.499 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.757 nvme0n1 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:15.757 12:39:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:15.757 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.758 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.016 nvme0n1 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
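
By this point the trace has settled into a fixed per-key pattern: host/auth.sh@103 (nvmet_auth_set_key) programs the digest, DH group, and key pair into the target, then host/auth.sh@104 (connect_authenticate) drives the SPDK host side, and host/auth.sh@64-65 verify that the controller came up before detaching it for the next iteration. A condensed host-side sketch, reconstructed from the auth.sh@55-65 trace lines; this paraphrases the xtrace rather than quoting the script source (rpc_cmd wraps SPDK's rpc.py):

    # Reconstructed from the xtrace above; a sketch, not auth.sh verbatim.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # auth.sh@58: expands to nothing when no controller key exists for this
        # index, so bidirectional auth is requested only where a ckey is defined.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # auth.sh@60: allow only the combination under test, forcing the
        # negotiation to use it.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        # auth.sh@61: attach over TCP with the key under test.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # auth.sh@64-65: authentication succeeded iff the controller is visible.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The get_main_ns_ip helper that supplies the -a address resolves the initiator IP per transport; condensed from the nvmf/common.sh@769-783 lines that repeat throughout this section (the name of the transport variable is not visible in the trace, so TEST_TRANSPORT below is an assumption):

    # Condensed: map the transport to the variable naming the right address,
    # then dereference it (tcp -> NVMF_INITIATOR_IP -> 10.0.0.1 in this run).
    get_main_ns_ip() {
        local ip var
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        var=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!var}                     # indirect expansion of the chosen variable
        [[ -n $ip ]] && echo "$ip"
    }
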
00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.016 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.275 nvme0n1 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.275 12:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.275 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.276 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.534 nvme0n1 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.534 12:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.534 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.792 nvme0n1 00:36:16.792 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.792 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.792 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.792 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.792 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.792 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.050 12:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.308 nvme0n1 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 
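
The four echo lines closing the block above (host/auth.sh@48-51) are nvmet_auth_set_key publishing the parameters for the next pass (ffdhe6144, key index 1): the HMAC digest, the DH group, the host's DHHC-1 secret and, when one is defined, the controller secret for bidirectional authentication. The xtrace shows only the echoed values, not their destinations; assuming the script configures the kernel nvmet target through configfs, the usual mechanism for in-kernel DH-HMAC-CHAP, the writes would look roughly like the sketch below. The paths are an assumption based on the Linux nvmet host attributes, not taken from the log:

    # Assumed configfs destinations; only the echoed values appear in the trace.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"     # auth.sh@48, digest
    echo ffdhe6144      > "$host/dhchap_dhgroup"  # auth.sh@49, DH group
    echo "$key"         > "$host/dhchap_key"      # auth.sh@50, host secret
    # auth.sh@51 first tests [[ -z $ckey ]]; key index 4 carries no controller
    # key (the keyid=4 passes in this section show [[ -z '' ]] and skip the
    # echo), so the sweep also covers unidirectional authentication.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
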
00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.308 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.309 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.309 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.309 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:17.309 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.309 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.875 nvme0n1 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.875 12:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.875 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.133 nvme0n1 00:36:18.133 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.133 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.133 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.133 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.133 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.133 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:18.396 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.397 12:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.397 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.656 nvme0n1 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.656 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.657 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.657 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.657 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.657 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.657 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.657 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.657 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.222 nvme0n1 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.222 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.223 12:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:19.789 nvme0n1 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:19.789 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.790 12:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.356 nvme0n1 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:20.356 
12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.356 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.614 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.180 nvme0n1 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.180 
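The get_main_ns_ip helper traced repeatedly above picks the address the initiator should dial: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and echoes that variable's value, 10.0.0.1 in this run. A sketch reconstructed from the xtrace; the TEST_TRANSPORT variable standing in for the literal "tcp" seen in the trace is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion yields its value
        echo "${!ip}"                          # 10.0.0.1 in this run
    }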
12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.180 12:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.746 nvme0n1 00:36:21.746 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.746 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.746 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.746 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.746 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.746 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.746 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.747 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.313 nvme0n1 00:36:22.313 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.313 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.313 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.313 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.313 12:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.313 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.314 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.314 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.314 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.314 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.314 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.572 nvme0n1 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.572 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.831 nvme0n1 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:22.831 12:39:29 
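Between connects, the test confirms that authentication actually produced a controller and then tears it down: bdev_nvme_get_controllers is piped through jq to extract the controller name, compared against nvme0, and the controller is detached before the next digest/dhgroup/key combination. The check-and-cleanup step, sketched with rpc.py in place of the test's rpc_cmd wrapper:

    # Verify the authenticated connect registered a controller,
    # then detach so the next combination starts from a clean state.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    rpc.py bdev_nvme_detach_controller nvme0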
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.831 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.090 nvme0n1 00:36:23.090 12:39:29 
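The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line seen before every connect is how the script makes bidirectional authentication optional: the :+ expansion produces the --dhchap-ctrlr-key argument pair only when a controller key exists for that index, so key index 4 (whose ckey is empty in this run) connects with unidirectional auth. The same pattern in isolation, with placeholder values:

    ckeys=([1]="DHHC-1:02:placeholder" [4]="")   # index 4 has no controller key
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints 0: no extra args; with keyid=1 it would print 2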
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.090 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.091 nvme0n1 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.091 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.349 12:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.349 nvme0n1 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.349 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.608 nvme0n1 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.608 
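At this point the outer loops have advanced to sha384 with ffdhe3072. The whole matrix comes from three nested loops, visible in the trace as host/auth.sh@100-104: every digest is paired with every DH group and every key index, and each iteration re-keys the target and re-runs the authenticated connect. Roughly (the digests/dhgroups/keys arrays are defined earlier in auth.sh; their contents here are inferred from the trace):

    for digest in "${digests[@]}"; do         # sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do   # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do    # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done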
12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.608 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.609 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.867 12:39:30 
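On the target side, nvmet_auth_set_key (host/auth.sh@42-51) echoes the digest as 'hmac(shaN)', the DH group name, the key, and, when present, the controller key; the redirection targets are cut off in this trace, but on a Linux kernel nvmet target these values would land in the per-host configfs attributes. A sketch under that assumption; the configfs paths and the hostnqn variable are assumptions, not shown in the trace:

    # Assumed Linux nvmet configfs layout; adjust hostnqn/paths to the setup.
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
    echo ffdhe3072      > "$host_dir/dhchap_dhgroup"
    echo "$key"         > "$host_dir/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"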
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.867 nvme0n1 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.867 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.868 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.126 nvme0n1 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.126 12:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.387 nvme0n1 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.387 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.388 
12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.388 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.645 nvme0n1 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.645 
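The ip_candidates block that repeats before every attach (nvmf/common.sh@769-783) is a transport-to-address lookup: it maps the transport name to the *name* of the environment variable holding the right IP, then dereferences that name. A sketch of the pattern, with the variable names taken directly from the trace (TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 in this run):

get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP
		[tcp]=NVMF_INITIATOR_IP
	)
	[[ -z $TEST_TRANSPORT ]] && return 1                 # traced as: [[ -z tcp ]]
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}                 # e.g. ip=NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1                          # indirect expansion: [[ -z 10.0.0.1 ]]
	echo "${!ip}"                                        # -> 10.0.0.1, fed to attach -a
}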
12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.645 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.903 nvme0n1 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.903 12:39:31 
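Every round traced in this section has the same shape: restrict the host's allowed DH-HMAC-CHAP parameters, attach with the per-keyid secrets, confirm the controller actually came up, then tear it down. A sketch of one round under those assumptions (rpc_cmd is taken to wrap SPDK's rpc.py as elsewhere in the suite; all RPC names and flags appear verbatim in the trace above):

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# --dhchap-ctrlr-key is only passed when a controller key exists (auth.sh@58)
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# authentication succeeded iff the named controller exists afterwards
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

This also explains the "nvme0n1" lines interleaved in the log: each successful attach briefly surfaces the remote namespace as a block device before the detach removes it.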
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.903 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.161 nvme0n1 00:36:25.161 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.161 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.161 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.161 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.161 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.419 12:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.419 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.419 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.419 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.419 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.419 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.419 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.419 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.420 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.678 nvme0n1 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.678 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.936 nvme0n1 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.937 12:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.937 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.195 nvme0n1 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.195 12:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.195 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.454 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.712 nvme0n1 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.712 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.279 nvme0n1 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.279 12:39:33 
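Stepping back, this whole section is one digest (sha384) crossed with every DH group and every key slot; the host/auth.sh@101 and @102 markers in the trace are the two loops doing that. A sketch of the driver, assuming dhgroups and keys are the arrays the trace iterates (the group list shown in the comment is inferred from the ffdhe3072/ffdhe4096/ffdhe6144 rounds visible here, not from the script):

for dhgroup in "${dhgroups[@]}"; do        # ... ffdhe3072 ffdhe4096 ffdhe6144 ...
	for keyid in "${!keys[@]}"; do     # key slots 0..4 in this run
		nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"   # target side
		connect_authenticate "sha384" "$dhgroup" "$keyid" # host side + verify
	done
done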
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.279 12:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.279 12:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.538 nvme0n1 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.538 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.538 
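On the host side, connect_authenticate boils down to the two RPCs traced above: restrict the allowed digests and DH groups, then attach with the matching key names. The same calls can be issued standalone with scripts/rpc.py (rpc_cmd is the autotest wrapper around it); every flag below is copied from the trace for the keyid=3 iteration.

    # Host side: allow exactly one digest/dhgroup pair, then attach with
    # DH-HMAC-CHAP host and controller keys (key3/ckey3 are key names,
    # not raw secrets).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3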
12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.105 nvme0n1 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.105 12:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.364 nvme0n1 00:36:28.364 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.364 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.364 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.364 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.364 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.364 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.622 12:39:35 
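The get_main_ns_ip fragment that repeats before every attach (nvmf/common.sh@769-783) reconstructs cleanly from the trace: it maps the transport to the name of an environment variable, then expands that name indirectly. The transport variable itself is already expanded to tcp in the xtrace; TEST_TRANSPORT below is an assumed name for it.

    # Reconstructed from the xtrace; returns the IP the host should dial.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1            # trace shows 'tcp'
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # a variable *name*
        [[ -z ${!ip} ]] && return 1                     # indirect expansion
        echo "${!ip}"                                   # here: 10.0.0.1
    }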
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.622 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.188 nvme0n1 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
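A note on the secrets themselves: in the DHHC-1 representation used throughout this run, the two-digit field after DHHC-1: records how the secret was transformed (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why keyid 1 above pairs a :00: host key with a :02: controller key. Secrets in this format can be produced with nvme-cli; the invocation below is illustrative and its flags may differ between nvme-cli versions.

    # Illustrative: generate a DHHC-1 secret (requires a recent nvme-cli).
    nvme gen-dhchap-key --nqn nqn.2024-02.io.spdk:host0 --key-length 32 --hmac 2
    # -> DHHC-1:02:<base64>:  (02 marks a SHA-384-transformed secret)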
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.188 12:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.755 nvme0n1 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.755 
12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.755 12:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.321 nvme0n1 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.321 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
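Each iteration ends with the verification just traced (host/auth.sh@64-65): the connect only counts if a controller actually appears, and it is torn down before the next combination runs. Pulled out of the loop, the check is simply:

    # Pass criterion: the attach must have produced a controller named nvme0.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]
    # Clean up so the next digest/dhgroup/keyid combination starts fresh.
    rpc_cmd bdev_nvme_detach_controller nvme0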
DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.579 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.145 nvme0n1 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.145 12:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.145 12:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.145 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.146 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.146 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.146 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.146 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:31.146 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.146 12:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.712 nvme0n1 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
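The @100/@101/@102 markers above show the sweep moving from sha384 to sha512 and resetting to ffdhe2048: host/auth.sh nests three loops so every digest is exercised against every DH group and every key index. Schematically (array contents inferred from this excerpt, which covers sha384 and sha512 with groups ffdhe2048 through ffdhe8192 and key indexes 0-4):

    for digest in "${digests[@]}"; do           # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do     # host/auth.sh@101
            for keyid in "${!keys[@]}"; do      # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid" # @104
            done
        done
    done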
ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.712 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:31.971 nvme0n1 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.971 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.229 nvme0n1 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:32.229 
12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.229 12:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.229 nvme0n1 00:36:32.230 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.230 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.489 
12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.489 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.490 nvme0n1 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.490 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.748 nvme0n1 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:32.748 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.749 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.007 nvme0n1 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.007 
12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:33.007 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.008 12:39:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.008 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.266 nvme0n1 00:36:33.266 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.266 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.266 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.266 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.266 12:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:33.266 12:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.266 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.267 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.267 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.267 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.267 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.267 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.525 nvme0n1 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.525 12:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.525 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.784 nvme0n1 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.784 
12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.784 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
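
For reference, each connect_authenticate iteration traced here reduces to a short host-side RPC sequence. Below is a minimal sketch of one bidirectional iteration (digest sha512, dhgroup ffdhe3072, keyid 3), re-expressed with scripts/rpc.py, which the harness's rpc_cmd wrapper invokes; it assumes the target subsystem nqn.2024-02.io.spdk:cnode0 is listening on 10.0.0.1:4420 and that keyring entries named "key3"/"ckey3" were registered earlier in the test, outside this excerpt:

  rpc=scripts/rpc.py

  # Restrict the host to the digest/dhgroup pair under test for this iteration.
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Attach with bidirectional authentication: host key plus controller key.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # Verify the controller came up under the expected name, then tear it down.
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0

Note the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion in the trace: for keyid 4 the controller key is empty, so --dhchap-ctrlr-key is dropped and the attach (as in the key4 iterations above and below) authenticates unidirectionally with --dhchap-key key4 alone.
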
00:36:34.043 nvme0n1 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.043 12:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.043 12:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.302 nvme0n1 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.302 12:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:34.302 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.560 12:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.560 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.819 nvme0n1 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.819 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.077 nvme0n1 00:36:35.077 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.077 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.077 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.078 12:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.336 nvme0n1 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.336 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.594 nvme0n1 00:36:35.594 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.594 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.594 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.594 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.594 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.594 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.852 12:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.852 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.110 nvme0n1 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:36.110 12:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.110 12:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.676 nvme0n1 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.676 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.934 nvme0n1 00:36:36.934 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.934 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.934 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.934 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.934 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.934 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.193 12:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.451 nvme0n1 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.451 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.452 12:39:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.452 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.018 nvme0n1 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjlkZmYwNWJkY2MyMDdiYzFkZWEyNzE2ZGFjNWQ2ZmH8YgrI: 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGI2NWZjOGE3ZmVlOThjMjJlZDA0OThmYjIyYjE5Y2RmMDkwNmIyMGFlZmFkNjE3ZjlkNjg4ZjdlNTFiZGI2OSnZorA=: 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.018 12:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.585 nvme0n1 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.585 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.152 nvme0n1 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.152 12:39:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.152 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:39.410 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.411 12:39:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.411 12:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:39.411 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.411 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.979 nvme0n1 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM1N2FlZDAyOTAxYTYxMmUyNzFjZTNhYWExYjBlMDFhYjZlNTY5ODViZWQ3NmRifeK+0A==: 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDIzN2Y1ZGZjZmFhODZmYTgxNmNlYzE2NDEyNzI1ZTRSWKe+: 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:39.979 12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.979 
12:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.547 nvme0n1 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ZmNjg4MTJjNTE4MWE3ZDg3ODU1ZGViNGZmZDQ5YTM4OWNhNWZiYTYwZDhjZTBjYWUzNWUxMWI0MjU5YmYwMDyk5T0=: 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.547 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.114 nvme0n1 00:36:41.114 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.114 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.114 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.114 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.114 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.114 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:41.373 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.374 12:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.374 request: 00:36:41.374 { 00:36:41.374 "name": "nvme0", 00:36:41.374 "trtype": "tcp", 00:36:41.374 "traddr": "10.0.0.1", 00:36:41.374 "adrfam": "ipv4", 00:36:41.374 "trsvcid": "4420", 00:36:41.374 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:41.374 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:41.374 "prchk_reftag": false, 00:36:41.374 "prchk_guard": false, 00:36:41.374 "hdgst": false, 00:36:41.374 "ddgst": false, 00:36:41.374 "allow_unrecognized_csi": false, 00:36:41.374 "method": "bdev_nvme_attach_controller", 00:36:41.374 "req_id": 1 00:36:41.374 } 00:36:41.374 Got JSON-RPC error response 00:36:41.374 response: 00:36:41.374 { 00:36:41.374 "code": -5, 00:36:41.374 "message": "Input/output error" 00:36:41.374 } 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
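Note on the exchange above: the target subsystem is keyed for DH-HMAC-CHAP (hmac(sha256), ffdhe2048), so an attach that supplies no --dhchap-key cannot complete the authentication transaction and bdev_nvme_attach_controller comes back with JSON-RPC error -5 (Input/output error); the follow-up bdev_nvme_get_controllers | jq length check confirms nothing was left attached. A minimal standalone sketch of the same negative check, assuming a running SPDK target and the stock scripts/rpc.py (the if/negation framing stands in for the harness's NOT helper):

    #!/usr/bin/env bash
    set -eu
    RPC=./scripts/rpc.py   # path assumed; adjust to your tree

    # Pin the initiator to the digest/DH group under test.
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attaching without a key to an auth-required subsystem must fail...
    if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "FAIL: keyless attach unexpectedly succeeded" >&2
        exit 1
    fi
    # ...and must leave no controller behind.
    [[ $($RPC bdev_nvme_get_controllers | jq length) -eq 0 ]]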
00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.374 request: 00:36:41.374 { 00:36:41.374 "name": "nvme0", 00:36:41.374 "trtype": "tcp", 00:36:41.374 "traddr": "10.0.0.1", 00:36:41.374 "adrfam": "ipv4", 00:36:41.374 "trsvcid": "4420", 00:36:41.374 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:41.374 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:41.374 "prchk_reftag": false, 00:36:41.374 "prchk_guard": false, 00:36:41.374 "hdgst": false, 00:36:41.374 "ddgst": false, 00:36:41.374 "dhchap_key": "key2", 00:36:41.374 "allow_unrecognized_csi": false, 00:36:41.374 "method": "bdev_nvme_attach_controller", 00:36:41.374 "req_id": 1 00:36:41.374 } 00:36:41.374 Got JSON-RPC error response 00:36:41.374 response: 00:36:41.374 { 00:36:41.374 "code": -5, 00:36:41.374 "message": "Input/output error" 00:36:41.374 } 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.374 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
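The DHHC-1 strings echoed through nvmet_auth_set_key in these steps follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 blob>:, where <t> selects the hash transform applied to the secret (00 = none, 01/02/03 = SHA-256/384/512) and the blob carries the secret plus a CRC-32 check value. The key2 attach above therefore fails inside the authentication transaction (the same -5) rather than at key parse time: key2 names a well-formed secret registered with the initiator earlier in this run, it just is not the one the target holds for this host. If you need compatible secrets outside the harness, nvme-cli can mint them (flag names as in current nvme-cli; verify against your version):

    # 32-byte random secret with a SHA-256 transform; prints "DHHC-1:01:...:".
    nvme gen-dhchap-key --key-length=32 --hmac=1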
00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.634 request: 00:36:41.634 { 00:36:41.634 "name": "nvme0", 00:36:41.634 "trtype": "tcp", 00:36:41.634 "traddr": "10.0.0.1", 00:36:41.634 "adrfam": "ipv4", 00:36:41.634 "trsvcid": "4420", 00:36:41.634 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:41.634 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:41.634 "prchk_reftag": false, 00:36:41.634 "prchk_guard": false, 00:36:41.634 "hdgst": false, 00:36:41.634 "ddgst": false, 00:36:41.634 "dhchap_key": "key1", 00:36:41.634 "dhchap_ctrlr_key": "ckey2", 00:36:41.634 "allow_unrecognized_csi": false, 00:36:41.634 "method": "bdev_nvme_attach_controller", 00:36:41.634 "req_id": 1 00:36:41.634 } 00:36:41.634 Got JSON-RPC error response 00:36:41.634 response: 00:36:41.634 { 00:36:41.634 "code": -5, 00:36:41.634 "message": "Input/output 
error" 00:36:41.634 } 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.634 nvme0n1 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.634 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.893 request: 00:36:41.893 { 00:36:41.893 "name": "nvme0", 00:36:41.893 "dhchap_key": "key1", 00:36:41.893 "dhchap_ctrlr_key": "ckey2", 00:36:41.893 "method": "bdev_nvme_set_keys", 00:36:41.893 "req_id": 1 00:36:41.893 } 00:36:41.893 Got JSON-RPC error response 00:36:41.893 response: 00:36:41.893 { 00:36:41.893 "code": -13, 00:36:41.893 "message": "Permission denied" 00:36:41.893 } 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:41.893 12:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:42.829 12:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.829 12:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:42.829 12:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.829 12:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.088 12:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.088 12:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:43.088 12:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiZTIxNTYzZTRmNTdmN2Q1NWEyZGNhZDQ1NTBhMGVkZDYyYjBkYzZkNzI1ODYyk7zoVw==: 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: ]] 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDJlZjA5NThhMmUzM2MzYTdkY2I4ZWI4ZjNlZWYwODk0ODllN2FiNjc3OTgzMjQ4UR05Wg==: 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.025 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.284 nvme0n1 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODdiYTJhYzE1MDU0MzEwOGMxMDY5NDk0M2Y2ZmIwNTUK/jDY: 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: ]] 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ1MTExYjRhMmQ5OTI1NTVjNzBmMWFhMDRjYWQzMTGnLcVl: 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.284 request: 00:36:44.284 { 00:36:44.284 "name": "nvme0", 00:36:44.284 "dhchap_key": "key2", 00:36:44.284 "dhchap_ctrlr_key": "ckey1", 00:36:44.284 "method": "bdev_nvme_set_keys", 00:36:44.284 "req_id": 1 00:36:44.284 } 00:36:44.284 Got JSON-RPC error response 00:36:44.284 response: 00:36:44.284 { 00:36:44.284 "code": -13, 00:36:44.284 "message": "Permission denied" 00:36:44.284 } 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.284 12:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.285 12:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:44.285 12:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:45.221 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.221 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:45.221 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.221 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.221 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:45.480 12:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:45.480 rmmod nvme_tcp 00:36:45.480 rmmod nvme_fabrics 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3872041 ']' 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3872041 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3872041 ']' 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3872041 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872041 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3872041' 00:36:45.480 killing process with pid 3872041 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3872041 00:36:45.480 12:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3872041 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:46.420 12:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:48.327 12:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:50.907 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:50.907 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:51.844 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:51.844 12:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZrP /tmp/spdk.key-null.xTz /tmp/spdk.key-sha256.GA6 /tmp/spdk.key-sha384.t1A /tmp/spdk.key-sha512.4Ak /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:51.844 12:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:54.376 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:54.376 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
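For reference, the cleanup and clean_kernel_target sequence traced above reduces to a handful of configfs operations. A sketch mirroring the logged steps; the redirect target of the bare 'echo 0' is hidden by the trace, so pointing it at the namespace's enable attribute is an assumption:

    #!/usr/bin/env bash
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop the host grant
    rmdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/namespaces/1/enable"                 # assumed target of 'echo 0'
    rm -f "$nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                            # unload once configfs is empty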
00:36:54.376 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:54.376 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:54.376 00:36:54.376 real 0m53.423s 00:36:54.376 user 0m48.198s 00:36:54.376 sys 0m11.809s 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.376 ************************************ 00:36:54.376 END TEST nvmf_auth_host 00:36:54.376 ************************************ 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.376 ************************************ 00:36:54.376 START TEST nvmf_digest 00:36:54.376 ************************************ 00:36:54.376 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:54.377 * Looking for test storage... 
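On the suite boundary above: the real/user/sys triple and the START TEST/END TEST banners are emitted by the harness's run_test wrapper, which times each suite with the shell's time keyword. Roughly this shape, reconstructed from the visible output rather than from the actual autotest_common.sh source:

    # Hypothetical reconstruction of the wrapper's core.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # produces the real/user/sys block seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }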
00:36:54.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:54.377 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:54.377 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:36:54.377 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:54.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.636 --rc genhtml_branch_coverage=1 00:36:54.636 --rc genhtml_function_coverage=1 00:36:54.636 --rc genhtml_legend=1 00:36:54.636 --rc geninfo_all_blocks=1 00:36:54.636 --rc geninfo_unexecuted_blocks=1 00:36:54.636 00:36:54.636 ' 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:54.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.636 --rc genhtml_branch_coverage=1 00:36:54.636 --rc genhtml_function_coverage=1 00:36:54.636 --rc genhtml_legend=1 00:36:54.636 --rc geninfo_all_blocks=1 00:36:54.636 --rc geninfo_unexecuted_blocks=1 00:36:54.636 00:36:54.636 ' 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:54.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.636 --rc genhtml_branch_coverage=1 00:36:54.636 --rc genhtml_function_coverage=1 00:36:54.636 --rc genhtml_legend=1 00:36:54.636 --rc geninfo_all_blocks=1 00:36:54.636 --rc geninfo_unexecuted_blocks=1 00:36:54.636 00:36:54.636 ' 00:36:54.636 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:54.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.637 --rc genhtml_branch_coverage=1 00:36:54.637 --rc genhtml_function_coverage=1 00:36:54.637 --rc genhtml_legend=1 00:36:54.637 --rc geninfo_all_blocks=1 00:36:54.637 --rc geninfo_unexecuted_blocks=1 00:36:54.637 00:36:54.637 ' 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.637 
12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:54.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:54.637 12:40:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:54.637 12:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:59.906 
12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:59.906 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:59.906 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:59.906 Found net devices under 0000:af:00.0: cvl_0_0 
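The NIC scan here is plain sysfs: for each allow-listed PCI function the harness expands /sys/bus/pci/devices/<bdf>/net/* to find the bound net device (the second E810 port is reported just below). The same lookup by hand, using the two 0x159b ports from this scan:

    # Map a PCI address to its kernel net device name(s).
    for pci in 0000:af:00.0 0000:af:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] && echo "$pci -> ${path##*/}"   # expect cvl_0_0 / cvl_0_1
        done
    done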
00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:59.906 Found net devices under 0000:af:00.1: cvl_0_1 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:59.906 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:59.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:59.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:36:59.907 00:36:59.907 --- 10.0.0.2 ping statistics --- 00:36:59.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.907 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:59.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:59.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:36:59.907 00:36:59.907 --- 10.0.0.1 ping statistics --- 00:36:59.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.907 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:59.907 ************************************ 00:36:59.907 START TEST nvmf_digest_clean 00:36:59.907 ************************************ 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3885618 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3885618 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3885618 ']' 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.907 12:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:59.907 [2024-12-10 12:40:06.693983] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:36:59.907 [2024-12-10 12:40:06.694071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.167 [2024-12-10 12:40:06.809424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.167 [2024-12-10 12:40:06.911253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.167 [2024-12-10 12:40:06.911299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.167 [2024-12-10 12:40:06.911309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.167 [2024-12-10 12:40:06.911319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.167 [2024-12-10 12:40:06.911327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:00.167 [2024-12-10 12:40:06.912594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.734 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.301 null0 00:37:01.301 [2024-12-10 12:40:07.884942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.301 [2024-12-10 12:40:07.909185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3885854 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3885854 /var/tmp/bperf.sock 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3885854 ']' 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:01.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:01.302 12:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.302 [2024-12-10 12:40:07.986294] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:37:01.302 [2024-12-10 12:40:07.986375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885854 ] 00:37:01.302 [2024-12-10 12:40:08.098381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.560 [2024-12-10 12:40:08.210175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.128 12:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:02.128 12:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:02.128 12:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:02.128 12:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:02.128 12:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:02.696 12:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:02.696 12:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:02.955 nvme0n1 00:37:02.955 12:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:02.955 12:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:03.214 Running I/O for 2 seconds... 
00:37:05.086 21410.00 IOPS, 83.63 MiB/s [2024-12-10T11:40:11.912Z] 21839.00 IOPS, 85.31 MiB/s 00:37:05.086 Latency(us) 00:37:05.086 [2024-12-10T11:40:11.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.086 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:05.086 nvme0n1 : 2.01 21849.56 85.35 0.00 0.00 5852.38 2855.50 13419.28 00:37:05.086 [2024-12-10T11:40:11.912Z] =================================================================================================================== 00:37:05.086 [2024-12-10T11:40:11.912Z] Total : 21849.56 85.35 0.00 0.00 5852.38 2855.50 13419.28 00:37:05.086 { 00:37:05.086 "results": [ 00:37:05.086 { 00:37:05.086 "job": "nvme0n1", 00:37:05.086 "core_mask": "0x2", 00:37:05.086 "workload": "randread", 00:37:05.086 "status": "finished", 00:37:05.086 "queue_depth": 128, 00:37:05.086 "io_size": 4096, 00:37:05.086 "runtime": 2.007134, 00:37:05.086 "iops": 21849.562610169527, 00:37:05.086 "mibps": 85.34985394597471, 00:37:05.086 "io_failed": 0, 00:37:05.086 "io_timeout": 0, 00:37:05.086 "avg_latency_us": 5852.378569636953, 00:37:05.086 "min_latency_us": 2855.497142857143, 00:37:05.086 "max_latency_us": 13419.27619047619 00:37:05.086 } 00:37:05.086 ], 00:37:05.086 "core_count": 1 00:37:05.086 } 00:37:05.086 12:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:05.086 12:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:05.086 12:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:05.086 12:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:05.086 | select(.opcode=="crc32c") 00:37:05.086 | "\(.module_name) \(.executed)"' 00:37:05.086 12:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3885854 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3885854 ']' 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3885854 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885854 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885854' 00:37:05.345 killing process with pid 3885854 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3885854 00:37:05.345 Received shutdown signal, test time was about 2.000000 seconds 00:37:05.345 00:37:05.345 Latency(us) 00:37:05.345 [2024-12-10T11:40:12.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.345 [2024-12-10T11:40:12.171Z] =================================================================================================================== 00:37:05.345 [2024-12-10T11:40:12.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:05.345 12:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3885854 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3886561 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3886561 /var/tmp/bperf.sock 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3886561 ']' 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:06.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:06.281 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.282 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:06.282 [2024-12-10 12:40:13.091819] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:37:06.282 [2024-12-10 12:40:13.091924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886561 ] 00:37:06.282 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:06.282 Zero copy mechanism will not be used. 00:37:06.540 [2024-12-10 12:40:13.203170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.540 [2024-12-10 12:40:13.316394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.108 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.108 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:07.108 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:07.108 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:07.108 12:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:07.676 12:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:07.676 12:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:08.243 nvme0n1 00:37:08.243 12:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:08.243 12:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:08.243 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:08.243 Zero copy mechanism will not be used. 00:37:08.243 Running I/O for 2 seconds... 
00:37:10.115 5452.00 IOPS, 681.50 MiB/s [2024-12-10T11:40:16.941Z] 5469.50 IOPS, 683.69 MiB/s 00:37:10.115 Latency(us) 00:37:10.115 [2024-12-10T11:40:16.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.115 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:10.115 nvme0n1 : 2.00 5469.90 683.74 0.00 0.00 2922.27 514.93 4712.35 00:37:10.115 [2024-12-10T11:40:16.941Z] =================================================================================================================== 00:37:10.115 [2024-12-10T11:40:16.941Z] Total : 5469.90 683.74 0.00 0.00 2922.27 514.93 4712.35 00:37:10.115 { 00:37:10.115 "results": [ 00:37:10.115 { 00:37:10.115 "job": "nvme0n1", 00:37:10.115 "core_mask": "0x2", 00:37:10.115 "workload": "randread", 00:37:10.115 "status": "finished", 00:37:10.115 "queue_depth": 16, 00:37:10.115 "io_size": 131072, 00:37:10.115 "runtime": 2.003326, 00:37:10.115 "iops": 5469.903550395692, 00:37:10.115 "mibps": 683.7379437994615, 00:37:10.115 "io_failed": 0, 00:37:10.115 "io_timeout": 0, 00:37:10.115 "avg_latency_us": 2922.274876020129, 00:37:10.115 "min_latency_us": 514.9257142857143, 00:37:10.115 "max_latency_us": 4712.350476190476 00:37:10.115 } 00:37:10.115 ], 00:37:10.115 "core_count": 1 00:37:10.115 } 00:37:10.374 12:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:10.374 12:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:10.374 12:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:10.374 12:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:10.374 | select(.opcode=="crc32c") 00:37:10.374 | "\(.module_name) \(.executed)"' 00:37:10.374 12:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3886561 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3886561 ']' 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3886561 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:10.374 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3886561 00:37:10.633 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:10.633 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:10.633 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3886561' 00:37:10.633 killing process with pid 3886561 00:37:10.633 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3886561 00:37:10.633 Received shutdown signal, test time was about 2.000000 seconds 00:37:10.633 00:37:10.633 Latency(us) 00:37:10.633 [2024-12-10T11:40:17.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.633 [2024-12-10T11:40:17.459Z] =================================================================================================================== 00:37:10.633 [2024-12-10T11:40:17.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:10.633 12:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3886561 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3887445 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3887445 /var/tmp/bperf.sock 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3887445 ']' 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:11.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.570 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.570 [2024-12-10 12:40:18.158259] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:37:11.570 [2024-12-10 12:40:18.158345] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887445 ] 00:37:11.570 [2024-12-10 12:40:18.270801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.570 [2024-12-10 12:40:18.378617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.136 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:12.136 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:12.136 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:12.136 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:12.136 12:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:12.704 12:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:12.704 12:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:12.963 nvme0n1 00:37:13.222 12:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:13.222 12:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.222 Running I/O for 2 seconds... 
00:37:15.092 23077.00 IOPS, 90.14 MiB/s [2024-12-10T11:40:21.918Z] 23378.50 IOPS, 91.32 MiB/s 00:37:15.092 Latency(us) 00:37:15.092 [2024-12-10T11:40:21.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.092 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:15.092 nvme0n1 : 2.01 23382.50 91.34 0.00 0.00 5463.92 3432.84 8800.55 00:37:15.092 [2024-12-10T11:40:21.918Z] =================================================================================================================== 00:37:15.092 [2024-12-10T11:40:21.918Z] Total : 23382.50 91.34 0.00 0.00 5463.92 3432.84 8800.55 00:37:15.092 { 00:37:15.092 "results": [ 00:37:15.092 { 00:37:15.092 "job": "nvme0n1", 00:37:15.092 "core_mask": "0x2", 00:37:15.092 "workload": "randwrite", 00:37:15.092 "status": "finished", 00:37:15.092 "queue_depth": 128, 00:37:15.092 "io_size": 4096, 00:37:15.092 "runtime": 2.006501, 00:37:15.092 "iops": 23382.495199354496, 00:37:15.092 "mibps": 91.3378718724785, 00:37:15.092 "io_failed": 0, 00:37:15.092 "io_timeout": 0, 00:37:15.092 "avg_latency_us": 5463.921662672785, 00:37:15.092 "min_latency_us": 3432.8380952380953, 00:37:15.092 "max_latency_us": 8800.548571428571 00:37:15.092 } 00:37:15.092 ], 00:37:15.092 "core_count": 1 00:37:15.092 } 00:37:15.351 12:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:15.351 12:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:15.351 12:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:15.351 12:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:15.351 | select(.opcode=="crc32c") 00:37:15.351 | "\(.module_name) \(.executed)"' 00:37:15.351 12:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3887445 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3887445 ']' 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3887445 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:15.351 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3887445 00:37:15.610 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:15.610 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:15.610 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3887445' 00:37:15.610 killing process with pid 3887445 00:37:15.610 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3887445 00:37:15.610 Received shutdown signal, test time was about 2.000000 seconds 00:37:15.610 00:37:15.610 Latency(us) 00:37:15.610 [2024-12-10T11:40:22.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.610 [2024-12-10T11:40:22.436Z] =================================================================================================================== 00:37:15.610 [2024-12-10T11:40:22.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:15.610 12:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3887445 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3888157 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3888157 /var/tmp/bperf.sock 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3888157 ']' 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.546 12:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.546 [2024-12-10 12:40:23.110111] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:37:16.546 [2024-12-10 12:40:23.110212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888157 ] 00:37:16.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:16.546 Zero copy mechanism will not be used. 00:37:16.546 [2024-12-10 12:40:23.221029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.546 [2024-12-10 12:40:23.331997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.481 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.481 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:17.481 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:17.481 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:17.481 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:17.740 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:17.740 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.306 nvme0n1 00:37:18.306 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:18.306 12:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:18.306 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:18.306 Zero copy mechanism will not be used. 00:37:18.306 Running I/O for 2 seconds... 
00:37:20.176 6071.00 IOPS, 758.88 MiB/s [2024-12-10T11:40:27.002Z] 5906.50 IOPS, 738.31 MiB/s 00:37:20.176 Latency(us) 00:37:20.176 [2024-12-10T11:40:27.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.176 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:20.176 nvme0n1 : 2.00 5904.46 738.06 0.00 0.00 2704.46 2137.72 6054.28 00:37:20.176 [2024-12-10T11:40:27.002Z] =================================================================================================================== 00:37:20.176 [2024-12-10T11:40:27.002Z] Total : 5904.46 738.06 0.00 0.00 2704.46 2137.72 6054.28 00:37:20.176 { 00:37:20.176 "results": [ 00:37:20.176 { 00:37:20.176 "job": "nvme0n1", 00:37:20.176 "core_mask": "0x2", 00:37:20.176 "workload": "randwrite", 00:37:20.176 "status": "finished", 00:37:20.176 "queue_depth": 16, 00:37:20.176 "io_size": 131072, 00:37:20.176 "runtime": 2.004079, 00:37:20.176 "iops": 5904.457858198205, 00:37:20.176 "mibps": 738.0572322747756, 00:37:20.176 "io_failed": 0, 00:37:20.176 "io_timeout": 0, 00:37:20.176 "avg_latency_us": 2704.464055566958, 00:37:20.176 "min_latency_us": 2137.7219047619046, 00:37:20.176 "max_latency_us": 6054.278095238095 00:37:20.176 } 00:37:20.176 ], 00:37:20.176 "core_count": 1 00:37:20.176 } 00:37:20.176 12:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:20.176 12:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:20.176 12:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:20.176 12:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:20.176 | select(.opcode=="crc32c") 00:37:20.176 | "\(.module_name) \(.executed)"' 00:37:20.176 12:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3888157 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3888157 ']' 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3888157 00:37:20.434 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:20.435 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.435 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3888157 00:37:20.435 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:20.435 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:20.435 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3888157' 00:37:20.435 killing process with pid 3888157 00:37:20.435 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3888157 00:37:20.435 Received shutdown signal, test time was about 2.000000 seconds 00:37:20.435 00:37:20.435 Latency(us) 00:37:20.435 [2024-12-10T11:40:27.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.435 [2024-12-10T11:40:27.261Z] =================================================================================================================== 00:37:20.435 [2024-12-10T11:40:27.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:20.435 12:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3888157 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3885618 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3885618 ']' 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3885618 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885618 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885618' 00:37:21.371 killing process with pid 3885618 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3885618 00:37:21.371 12:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3885618 00:37:22.747 00:37:22.747 real 0m22.600s 00:37:22.747 user 0m42.541s 00:37:22.747 sys 0m4.940s 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:22.747 ************************************ 00:37:22.747 END TEST nvmf_digest_clean 00:37:22.747 ************************************ 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:22.747 ************************************ 00:37:22.747 START TEST nvmf_digest_error 00:37:22.747 ************************************ 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3889276 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3889276 00:37:22.747 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:22.748 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3889276 ']' 00:37:22.748 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.748 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:22.748 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.748 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:22.748 12:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:22.748 [2024-12-10 12:40:29.376926] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:37:22.748 [2024-12-10 12:40:29.377035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:22.748 [2024-12-10 12:40:29.492819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.006 [2024-12-10 12:40:29.597839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.006 [2024-12-10 12:40:29.597878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.006 [2024-12-10 12:40:29.597888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.006 [2024-12-10 12:40:29.597914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.006 [2024-12-10 12:40:29.597922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:23.006 [2024-12-10 12:40:29.599432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.573 [2024-12-10 12:40:30.229586] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.573 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.832 null0 00:37:23.832 [2024-12-10 12:40:30.578247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.832 [2024-12-10 12:40:30.602479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3889524 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3889524 /var/tmp/bperf.sock 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3889524 ']' 
00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:23.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:23.832 12:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:24.091 [2024-12-10 12:40:30.680213] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:37:24.091 [2024-12-10 12:40:30.680301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889524 ]
00:37:24.091 [2024-12-10 12:40:30.790138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:24.091 [2024-12-10 12:40:30.904460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:25.026 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:25.026 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:25.026 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:25.026 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:25.027 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:25.027 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:25.027 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:25.027 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:25.027 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:25.027 12:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:25.285 nvme0n1
00:37:25.285 12:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:37:25.285 12:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:25.285 12:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
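On the host side the trace is equally mechanical: bdevperf gets its own RPC socket, the bdev layer is told to retry failed I/O forever, the controller is attached with the NVMe/TCP data digest enabled, and only then is the error module flipped from disable to corrupt, so the attach itself completes cleanly. Condensed, using only commands visible above:

  # Host (bdevperf) side, via its private socket
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target side, default socket: start corrupting crc32c results (-i 256 as passed by digest.sh)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

Since it is the target's crc32c that gets corrupted, the data digests it sends no longer match the payload; the initiator, verifying digests because of --ddgst, logs "data digest error" from nvme_tcp.c and fails each affected READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the three-line pattern repeated for the rest of this run.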
00:37:25.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:25.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:25.544 Running I/O for 2 seconds... 00:37:25.544 [2024-12-10 12:40:32.229480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.229522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.229540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.243686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.243722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.243736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.258021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.258049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.258062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.272289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.272317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.272330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.285729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.285757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.285769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.296009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.296036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.296049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.310582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.310609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.310621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.322159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.322193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.322205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.332197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.332223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.332235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.344792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.344825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.344837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.544 [2024-12-10 12:40:32.358682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.544 [2024-12-10 12:40:32.358710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.544 [2024-12-10 12:40:32.358730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.803 [2024-12-10 12:40:32.373320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.803 [2024-12-10 12:40:32.373348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.373361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.382871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.382896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 
12:40:32.382908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.396352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.396378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.396390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.410500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.410526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.410538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.423222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.423249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.423261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.433322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.433348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.433360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.446942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.446969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.446986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.461705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.461732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.461743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.471135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.471161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.471180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.485130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.485157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.485174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.498436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.498463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.498475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.512611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.512637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.512649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.522735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.522761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.522772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.536562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.536588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.536600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.549820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.549846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.549858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.559586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 
12:40:32.559621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.559633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.574178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.574205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.574217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.588123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.588150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.588161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.602347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.602373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.602385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.614414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.614444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.614456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.804 [2024-12-10 12:40:32.623920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.804 [2024-12-10 12:40:32.623947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.804 [2024-12-10 12:40:32.623959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.635687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.063 [2024-12-10 12:40:32.635716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.063 [2024-12-10 12:40:32.635730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.646107] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.063 [2024-12-10 12:40:32.646136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.063 [2024-12-10 12:40:32.646148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.658546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.063 [2024-12-10 12:40:32.658575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.063 [2024-12-10 12:40:32.658586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.673184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.063 [2024-12-10 12:40:32.673211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.063 [2024-12-10 12:40:32.673222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.687155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.063 [2024-12-10 12:40:32.687188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.063 [2024-12-10 12:40:32.687201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.696272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.063 [2024-12-10 12:40:32.696297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.063 [2024-12-10 12:40:32.696308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.708597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.063 [2024-12-10 12:40:32.708623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.063 [2024-12-10 12:40:32.708635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.063 [2024-12-10 12:40:32.720486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.720512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.720524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.731154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.731187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.731199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.741653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.741679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.741691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.755802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.755830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.755841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.765309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.765339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.765351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.779171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.779198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.779209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.793394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.793420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.793432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.807893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.807920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.807931] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.821441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.821466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.821478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.835525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.835552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.835564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.845065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.845091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.845103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.859145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.859178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.859191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.873512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.873540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.873552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.064 [2024-12-10 12:40:32.883033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.064 [2024-12-10 12:40:32.883059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.064 [2024-12-10 12:40:32.883071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.897404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.897438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:23882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.897450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.911277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.911305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.911316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.924355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.924383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.924395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.934372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.934399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.934411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.948041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.948069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.948081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.958812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.958839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.958850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.972215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.972243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.972254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.984651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.984686] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.984698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:32.997334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:32.997361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:32.997373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:33.007276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:33.007304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:33.007316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:33.020566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:33.020593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:33.020605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:33.033159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:33.033192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:33.033204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:33.042526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:33.042553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:33.042565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:33.057498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:33.057524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:33.057536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:33.069774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:33.069801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:33.069813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.323 [2024-12-10 12:40:33.083927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.323 [2024-12-10 12:40:33.083957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.323 [2024-12-10 12:40:33.083969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.324 [2024-12-10 12:40:33.095461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.324 [2024-12-10 12:40:33.095488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.324 [2024-12-10 12:40:33.095500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.324 [2024-12-10 12:40:33.105186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.324 [2024-12-10 12:40:33.105213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.324 [2024-12-10 12:40:33.105225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.324 [2024-12-10 12:40:33.117849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.324 [2024-12-10 12:40:33.117877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.324 [2024-12-10 12:40:33.117889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.324 [2024-12-10 12:40:33.127777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.324 [2024-12-10 12:40:33.127804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.324 [2024-12-10 12:40:33.127815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.324 [2024-12-10 12:40:33.139582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.324 [2024-12-10 12:40:33.139608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.324 [2024-12-10 12:40:33.139619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 
12:40:33.150717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.150746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.150758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.161809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.161837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.161849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.172704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.172731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.172743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.183640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.183673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.183684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.196858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.196885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.196898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.206829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.206855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.206867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 20439.00 IOPS, 79.84 MiB/s [2024-12-10T11:40:33.409Z] [2024-12-10 12:40:33.219981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.220009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.220021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.231285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.231311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.231323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.242273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.242300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.242312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.251799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.251826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.251839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.263824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.263851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.263863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.274271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.274299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.274311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.284948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.284976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.284989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.296225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.296253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7691 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.296265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.306038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.306065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.306077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.316626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.316654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.316665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.327709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.327736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.327747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.338966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.338992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.339004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.583 [2024-12-10 12:40:33.348477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.583 [2024-12-10 12:40:33.348504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.583 [2024-12-10 12:40:33.348515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.584 [2024-12-10 12:40:33.358651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.584 [2024-12-10 12:40:33.358678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.584 [2024-12-10 12:40:33.358690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.584 [2024-12-10 12:40:33.370871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.584 [2024-12-10 12:40:33.370902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:26.584 [2024-12-10 12:40:33.370915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:26.584 [2024-12-10 12:40:33.380631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:26.584 [2024-12-10 12:40:33.380657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:26.584 [2024-12-10 12:40:33.380668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 2024-12-10 12:40:33.392356 through 12:40:34.209531: the same three-record sequence (data digest error on tqpair=(0x615000325f80); READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:1; COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001 p:0 m:0 dnr:0) repeats for each remaining injected digest error of this run ...]
00:37:27.623 21690.00 IOPS, 84.73 MiB/s
00:37:27.623 Latency(us)
00:37:27.623 [2024-12-10T11:40:34.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:27.623 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:27.623 nvme0n1 : 2.00 21700.31 84.77 0.00 0.00 5892.92 3151.97 21096.35
00:37:27.623 [2024-12-10T11:40:34.449Z] ===================================================================================================================
00:37:27.623 [2024-12-10T11:40:34.449Z] Total : 21700.31 84.77 0.00 0.00 5892.92 3151.97 21096.35
00:37:27.623 {
00:37:27.623   "results": [
00:37:27.623     {
00:37:27.623       "job": "nvme0n1",
00:37:27.623       "core_mask": "0x2",
00:37:27.623       "workload": "randread",
00:37:27.623       "status": "finished",
00:37:27.623       "queue_depth": 128,
00:37:27.623       "io_size": 4096,
00:37:27.623       "runtime": 2.004948,
00:37:27.623       "iops": 21700.31342458757,
00:37:27.623       "mibps": 84.7668493147952,
00:37:27.623       "io_failed": 0,
00:37:27.623       "io_timeout": 0,
00:37:27.623       "avg_latency_us": 5892.918465438212,
00:37:27.623       "min_latency_us": 3151.9695238095237,
00:37:27.623       "max_latency_us": 21096.350476190477
00:37:27.623     }
00:37:27.623   ],
00:37:27.623   "core_count": 1
00:37:27.623 }
00:37:27.623 12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:27.623 12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:27.623 12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:27.623 | .driver_specific
00:37:27.623 | .nvme_error
00:37:27.623 | .status_code
00:37:27.623 | .command_transient_transport_error'
00:37:27.623 12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3889524
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3889524 ']'
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3889524
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:27.882 12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3889524
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3889524'
killing process with pid 3889524
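The pass/fail gate for the run above is the "(( 170 > 0 ))" check: get_transient_errcount reads the per-status error counters that the earlier bdev_nvme_set_options --nvme-error-stat call enabled and requires at least one COMMAND TRANSIENT TRANSPORT ERROR completion. A minimal standalone sketch of the same check, reusing the socket path, rpc.py path, bdev name, and jq filter exactly as they appear in this trace (treat all of them as assumptions in any other environment):

    #!/usr/bin/env bash
    # Sketch: count transient transport errors on nvme0n1 via bdevperf's RPC socket.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    errs=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The digest-error test passes only if the injected crc32c corruption
    # actually surfaced as transient transport errors (170 of them here).
    (( errs > 0 ))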
00:37:27.882 12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3889524
00:37:27.882 Received shutdown signal, test time was about 2.000000 seconds
00:37:27.882
00:37:27.882 Latency(us)
00:37:27.882 [2024-12-10T11:40:34.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:27.882 [2024-12-10T11:40:34.708Z] ===================================================================================================================
00:37:27.882 [2024-12-10T11:40:34.708Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:27.882 12:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3889524
00:37:28.821 12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3890206
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3890206 /var/tmp/bperf.sock
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3890206 ']'
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
12:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:28.821 [2024-12-10 12:40:35.448743] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:37:28.821 [2024-12-10 12:40:35.448830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890206 ]
00:37:28.821 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:28.821 Zero copy mechanism will not be used.
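From the trace above, run_bperf_err maps its three positional arguments (randread 131072 16) onto the bdevperf command line as -w, -o, and -q, launches bdevperf idle (-z) against /var/tmp/bperf.sock, and waits for the socket. An abridged sketch of that flow, covering only the steps visible in this log (the real helper in host/digest.sh does more, and $rootdir stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout):

    run_bperf_err() {
        local rw bs qd
        rw=$1 bs=$2 qd=$3    # randread 131072 16 in this run
        # -z makes bdevperf start idle and wait for a perform_tests RPC,
        # so error injection can be configured before any I/O is issued.
        "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
        bperfpid=$!                              # 3890206 in this run
        waitforlisten "$bperfpid" /var/tmp/bperf.sock
    }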
00:37:28.821 [2024-12-10 12:40:35.560291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:29.080 [2024-12-10 12:40:35.669541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:29.647 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:29.906 nvme0n1
00:37:30.165 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:30.165 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:30.165 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:30.165 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:30.165 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:30.165 12:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:30.165 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:30.165 Zero copy mechanism will not be used.
00:37:30.165 Running I/O for 2 seconds...
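Before the I/O starts, the trace above wires up the failure mode end to end: bdevperf is told to keep per-status NVMe error counters and retry indefinitely (--nvme-error-stat --bdev-retry-count -1), the controller is attached with TCP data digest verification on (--ddgst), and the accel error injector is armed for crc32c corruption. A condensed sketch of that RPC sequence as it appears here; the sockets and addresses are this log's, the plain rpc_cmd calls go to the default application socket rather than bperf.sock, and the exact meaning of "-i 32" is the error module's (it is reproduced verbatim from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF=/var/tmp/bperf.sock

    # bdevperf side: count completions per NVMe status code and never fail an
    # I/O permanently, so every injected digest error is retried and tallied.
    "$RPC" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # default-socket side: clear any previous crc32c injection state.
    "$RPC" accel_error_inject_error -o crc32c -t disable

    # bdevperf side: attach with data digest enabled; the receive-side crc32c
    # check (nvme_tcp_accel_seq_recv_compute_crc32_done) reports the
    # 'data digest error' lines seen below.
    "$RPC" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # default-socket side: arm crc32c corruption exactly as traced above.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32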
00:37:30.165 [2024-12-10 12:40:36.844622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:30.165 [2024-12-10 12:40:36.844666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:30.165 [2024-12-10 12:40:36.844681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... 2024-12-10 12:40:36.850935 through 12:40:37.131128: the same three-record sequence (data digest error on tqpair=(0x615000325f80); READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:32; COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd stepping 0002/0022/0042/0062, p:0 m:0 dnr:0) repeats for the remaining injected digest errors of this 2-second run ...]
00:37:30.427 [2024-12-10 12:40:37.137067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:30.427 [2024-12-10 12:40:37.137095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:30.427 [2024-12-10 12:40:37.137106] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.143153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.143186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.143215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.149265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.149292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.149304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.155359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.155386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.155398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.161848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.161875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.161887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.168529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.168564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.168576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.174802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.174829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.174841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.180633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.180660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.180672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.186746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.186773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.427 [2024-12-10 12:40:37.186784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.427 [2024-12-10 12:40:37.192878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.427 [2024-12-10 12:40:37.192905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.192917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.199258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.199284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.199295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.202694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.202720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.202731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.208553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.208581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.208597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.214650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.214677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.214689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.220853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.220881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.220893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.227103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.227130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.227142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.233268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.233296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.233308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.239347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.239374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.239386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.428 [2024-12-10 12:40:37.245529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.428 [2024-12-10 12:40:37.245556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.428 [2024-12-10 12:40:37.245568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.251441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.251471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.251483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.257204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.257231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.257243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.263000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.263032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.263044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.268813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.268839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.268851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.274552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.274581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.274594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.280287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.280315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.280326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.286100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.286130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.286142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.292224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.292250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.292262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.298175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.298201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.298213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.304046] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.304073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.304084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.309665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.309692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.309708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.315435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.315462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.315473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.321426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.321454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.321466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.327366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.327394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.327405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.332747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.332773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.332784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.338785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.338811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.338823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.688 [2024-12-10 12:40:37.344740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.688 [2024-12-10 12:40:37.344768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.688 [2024-12-10 12:40:37.344779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.350435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.350462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.350474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.356215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.356242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.356254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.362122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.362154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.362172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.368117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.368144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.368156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.374089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.374115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.374126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.380011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.380039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.380051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.385687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.385715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.385726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.391373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.391400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.391412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.397051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.397078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.397090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.402933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.402961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.402973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.408680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.408707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.408726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.414590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.414618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.414630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.420194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.420221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.420234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.426179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.426206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.426218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.432198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.432226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.432238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.437984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.438011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.438023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.443912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.443939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.443951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.447846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.447872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.447884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.452397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.452423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.452435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.458162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.458199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.458212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.463843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.463870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.463882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.469615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.469642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.469653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.474980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.475007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.475019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.480461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.480488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.480500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.485977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.486004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.486015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.491555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.491583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.491597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.497237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:30.689 [2024-12-10 12:40:37.497263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.689 [2024-12-10 12:40:37.497274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.689 [2024-12-10 12:40:37.502713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.690 [2024-12-10 12:40:37.502740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.690 [2024-12-10 12:40:37.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.690 [2024-12-10 12:40:37.508400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.690 [2024-12-10 12:40:37.508427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.690 [2024-12-10 12:40:37.508439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.949 [2024-12-10 12:40:37.514175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.949 [2024-12-10 12:40:37.514202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.949 [2024-12-10 12:40:37.514214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.520108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.520136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.520147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.526070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.526098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.526109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.532095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.532122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.532134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.538335] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.538362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.538374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.544721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.544748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.544759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.550567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.550594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.550605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.556050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.556084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.556095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.561649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.561676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.561688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.567182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.567209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.567221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.572724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.572750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.572762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.578344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.578371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.578383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.584228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.584255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.584266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.590225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.590252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.590264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.596271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.596299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.596311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.602235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.602262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.602273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.607989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.608016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.608028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.611104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.611131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.611143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.617017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.617045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.617057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.622970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.622996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.623008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.628862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.628889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.628901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.634869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.634895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.634907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.640863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.640889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.640901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.646695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.646720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.652627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.652658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.652669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.658395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.658421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.658440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.664394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.664420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.664432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.950 [2024-12-10 12:40:37.669575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.950 [2024-12-10 12:40:37.669603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.950 [2024-12-10 12:40:37.669614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.951 [2024-12-10 12:40:37.675596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.951 [2024-12-10 12:40:37.675624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.951 [2024-12-10 12:40:37.675640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.951 [2024-12-10 12:40:37.681487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.951 [2024-12-10 12:40:37.681515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.951 [2024-12-10 12:40:37.681526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.951 [2024-12-10 12:40:37.686588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.951 [2024-12-10 12:40:37.686615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.951 [2024-12-10 12:40:37.686626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.951 [2024-12-10 12:40:37.692079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.951 [2024-12-10 12:40:37.692105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:30.951 [2024-12-10 12:40:37.692116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:30.951 [2024-12-10 12:40:37.697559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:30.951 [2024-12-10 12:40:37.697586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:30.951 [2024-12-10 12:40:37.697598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[log condensed: the same three-record pattern — a data digest error on tqpair=(0x615000325f80) from nvme_tcp_accel_seq_recv_compute_crc32_done, the failing READ on qid:1 (len:32, varying cid/lba), then COMMAND TRANSIENT TRANSPORT ERROR (00/22) — repeats for over a hundred further commands between 12:40:37.703 and 12:40:38.470; one throughput sample was interleaved: 5252.00 IOPS, 656.50 MiB/s [2024-12-10T11:40:38.037Z]]
00:37:31.736 [2024-12-10 12:40:38.475603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.736 [2024-12-10 12:40:38.475628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.736 [2024-12-10 12:40:38.475639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.481115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.481141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.481152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.486902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.486931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.486943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.492481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.492507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.492519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.498147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.498178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.498191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.503865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.503891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.503902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.509643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.509669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.509680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.515374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.515399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.515410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.521082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.521109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.521121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.526745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.526770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.526782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.532321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.532347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.532361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.537977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.538002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.538013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.543711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.543736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.543748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.549469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.549496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.549507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.736 [2024-12-10 12:40:38.555228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.736 [2024-12-10 12:40:38.555254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.736 [2024-12-10 12:40:38.555266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.560964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.560990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.561002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.566780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.566806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.566818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.572548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.572573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.572584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.578298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.578325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.578336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.583946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.583976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.583988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.589699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.589726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.589737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.595455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.595482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.595493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.601131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.601157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.601174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.606796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.606822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.606833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.612578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.612603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.612614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.618271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.618297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.618308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.624037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.624063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.624074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.996 [2024-12-10 12:40:38.629732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.996 [2024-12-10 12:40:38.629758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.996 [2024-12-10 12:40:38.629774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.635502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.635528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.635540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.641189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.641221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.641233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.646890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.646915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.646927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.652575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.652601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.652612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.658287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.658313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.658325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.664019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.664044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.664056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.669703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.669730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.669742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.675490] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.675516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.675528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.681144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.681180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.681192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.686916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.686941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.686953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.692495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.692521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.692532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.698297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.698323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.698335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.703936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.703962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.703974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.709604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.709631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.709643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.715277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.715303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.715314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.720871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.720896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.720908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.726486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.726512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.726524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.732181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.732209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.732221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.737722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.737749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.737760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.743578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.743606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.743618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.749304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.749330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.749341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.754897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.754924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.754935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.760633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.760659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.760670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.766221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.766248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.766259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.771991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.772017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.772029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.777668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.777699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.997 [2024-12-10 12:40:38.777711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.997 [2024-12-10 12:40:38.783418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.997 [2024-12-10 12:40:38.783445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.998 [2024-12-10 12:40:38.783459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.998 [2024-12-10 12:40:38.789272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.998 [2024-12-10 12:40:38.789297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.998 [2024-12-10 12:40:38.789308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.998 [2024-12-10 12:40:38.794931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.998 [2024-12-10 12:40:38.794958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.998 [2024-12-10 12:40:38.794970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.998 [2024-12-10 12:40:38.800592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.998 [2024-12-10 12:40:38.800619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.998 [2024-12-10 12:40:38.800631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.998 [2024-12-10 12:40:38.806227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.998 [2024-12-10 12:40:38.806253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.998 [2024-12-10 12:40:38.806265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.998 [2024-12-10 12:40:38.812035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.998 [2024-12-10 12:40:38.812061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.998 [2024-12-10 12:40:38.812072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.998 [2024-12-10 12:40:38.817748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.998 [2024-12-10 12:40:38.817776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.998 [2024-12-10 12:40:38.817788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.257 [2024-12-10 12:40:38.823559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.257 [2024-12-10 12:40:38.823587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.257 [2024-12-10 12:40:38.823599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.257 [2024-12-10 12:40:38.829190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.257 [2024-12-10 12:40:38.829218] nvme_qpair.c: 
00:37:32.257 [2024-12-10 12:40:38.829190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.257 [2024-12-10 12:40:38.829218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.257 [2024-12-10 12:40:38.829229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.257 [2024-12-10 12:40:38.834901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.257 [2024-12-10 12:40:38.834929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.257 [2024-12-10 12:40:38.834940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.257 [2024-12-10 12:40:38.840637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.257 [2024-12-10 12:40:38.840664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.257 [2024-12-10 12:40:38.840675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.257 5325.00 IOPS, 665.62 MiB/s [2024-12-10T11:40:39.083Z]
00:37:32.257 [2024-12-10 12:40:38.847225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.257 [2024-12-10 12:40:38.847252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.257 [2024-12-10 12:40:38.847264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.257
00:37:32.257 Latency(us)
00:37:32.257 [2024-12-10T11:40:39.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:32.257 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:32.257 nvme0n1 : 2.00 5323.26 665.41 0.00 0.00 3002.14 702.17 8862.96
00:37:32.257 [2024-12-10T11:40:39.083Z] ===================================================================================================================
00:37:32.257 [2024-12-10T11:40:39.083Z] Total : 5323.26 665.41 0.00 0.00 3002.14 702.17 8862.96
00:37:32.257 {
00:37:32.257   "results": [
00:37:32.257     {
00:37:32.257       "job": "nvme0n1",
00:37:32.257       "core_mask": "0x2",
00:37:32.257       "workload": "randread",
00:37:32.257       "status": "finished",
00:37:32.257       "queue_depth": 16,
00:37:32.257       "io_size": 131072,
00:37:32.257       "runtime": 2.003658,
00:37:32.257       "iops": 5323.263750600152,
00:37:32.257       "mibps": 665.407968825019,
00:37:32.257       "io_failed": 0,
00:37:32.257       "io_timeout": 0,
00:37:32.257       "avg_latency_us": 3002.136890698526,
00:37:32.257       "min_latency_us": 702.1714285714286,
00:37:32.257       "max_latency_us": 8862.96380952381
00:37:32.257     }
00:37:32.257   ],
00:37:32.257   "core_count": 1
00:37:32.257 }
12:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:32.257 | .driver_specific
00:37:32.257 | .nvme_error
00:37:32.257 | .status_code
00:37:32.257 | .command_transient_transport_error'
00:37:32.257 12:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:32.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 345 > 0 ))
00:37:32.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3890206
00:37:32.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3890206 ']'
00:37:32.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3890206
00:37:32.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:32.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:32.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3890206
00:37:32.516 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:32.516 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:32.516 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3890206'
00:37:32.516 killing process with pid 3890206
00:37:32.516 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3890206
00:37:32.516 Received shutdown signal, test time was about 2.000000 seconds
00:37:32.516
00:37:32.516 Latency(us)
00:37:32.516 [2024-12-10T11:40:39.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:32.516 [2024-12-10T11:40:39.342Z] ===================================================================================================================
00:37:32.516 [2024-12-10T11:40:39.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:32.516 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3890206
00:37:33.531 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:37:33.531 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:33.531 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:33.531 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:37:33.531 12:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3890895
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3890895 /var/tmp/bperf.sock
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3890895 ']'
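
The 345 read back by the jq pipeline above is the per-bdev count of completions with COMMAND TRANSIENT TRANSPORT ERROR status, which bdev_nvme accumulates when started with --nvme-error-stat (the same option visible in the randwrite setup below). Reassembled from the traced commands, the check amounts to the following sketch; the rpc.py invocation and jq filter are verbatim from the trace, while the surrounding function body is an assumption about how host/digest.sh composes them:

    # Sketch of get_transient_errcount as traced above (wrapper assumed,
    # RPC call and jq filter copied from the log).
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }
    # digest.sh@71 then asserts at least one injected error was counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))

With the count confirmed non-zero, the read-phase bperf process (pid 3890206) is killed and the randwrite phase is started the same way below.
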
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:33.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:33.531 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:33.531 [2024-12-10 12:40:40.074895] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:37:33.531 [2024-12-10 12:40:40.074983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890895 ]
00:37:33.531 [2024-12-10 12:40:40.188318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:33.531 [2024-12-10 12:40:40.298623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:34.124 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:34.124 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:34.125 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:34.125 12:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:34.383 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:34.383 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:34.383 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:34.383 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:34.383 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:34.384 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
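
Spelled out, the write-phase setup traced above (and the arming step traced just below) reduces to the following sequence. Every method name and flag is verbatim from the log; the $RPC shorthand and the comments are added here, and bperf_rpc/bperf_py are simply these rpc.py/bdevperf.py calls against the bperf socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    # keep per-bdev NVMe error counters; -1 means unlimited bdev-layer retries
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # leave crc32c intact while connecting ...
    $RPC accel_error_inject_error -o crc32c -t disable
    # ... then attach with data digest enabled so payloads are CRC32C-checked
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm crc32c corruption (flags exactly as traced) and drive the queued job
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The attach call prints the new bdev name (nvme0n1, next line); once corruption is re-armed, each corrupted crc32c operation produces a Data digest error (tcp.c data_crc32_calc_done) and the corresponding WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as the records that follow show.
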
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.642 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:34.642 12:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:34.642 Running I/O for 2 seconds... 00:37:34.902 [2024-12-10 12:40:41.483357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:34.902 [2024-12-10 12:40:41.484655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:34.902 [2024-12-10 12:40:41.484691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:34.902 [2024-12-10 12:40:41.494435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:34.902 [2024-12-10 12:40:41.495914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:34.902 [2024-12-10 12:40:41.495943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:34.902 [2024-12-10 12:40:41.504035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:34.902 [2024-12-10 12:40:41.505516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:34.902 [2024-12-10 12:40:41.505543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:34.902 [2024-12-10 12:40:41.513072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:37:34.902 [2024-12-10 12:40:41.513823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:34.902 [2024-12-10 12:40:41.513852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:34.902 [2024-12-10 12:40:41.524025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:34.902 [2024-12-10 12:40:41.524833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:34.902 [2024-12-10 12:40:41.524859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:34.902 [2024-12-10 12:40:41.534966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:34.902 [2024-12-10 12:40:41.535901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:34.902 [2024-12-10 12:40:41.535926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0
00:37:34.902 [2024-12-10 12:40:41.545528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140
00:37:34.902 [2024-12-10 12:40:41.546506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:34.902 [2024-12-10 12:40:41.546532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
[... repeated identical triplets (Data digest error on tqpair=(0x618000004480), WRITE print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) on qid:1, pdu/cid/lba varying, from 12:40:41.557 through 12:40:41.747, omitted ...]
00:37:35.162 [2024-12-10 12:40:41.755875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8
00:37:35.162 [2024-12-10 12:40:41.757341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4075 len:1 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:37:35.162 [2024-12-10 12:40:41.757365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:35.162 [2024-12-10 12:40:41.765052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:35.162 [2024-12-10 12:40:41.765815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.162 [2024-12-10 12:40:41.765840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:35.162 [2024-12-10 12:40:41.777639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:35.162 [2024-12-10 12:40:41.778843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.162 [2024-12-10 12:40:41.778868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:35.162 [2024-12-10 12:40:41.787800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:35.162 [2024-12-10 12:40:41.788652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.162 [2024-12-10 12:40:41.788681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:35.162 [2024-12-10 12:40:41.797637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:35.162 [2024-12-10 12:40:41.799211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.162 [2024-12-10 12:40:41.799236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.807559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:35.163 [2024-12-10 12:40:41.808260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.808285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.818387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:37:35.163 [2024-12-10 12:40:41.819380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.819405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.828980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:35.163 [2024-12-10 12:40:41.830025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.830051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.839132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:35.163 [2024-12-10 12:40:41.840135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.840160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.849689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:35.163 [2024-12-10 12:40:41.850252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.850277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.860325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:35.163 [2024-12-10 12:40:41.861133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.861158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.870011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:35.163 [2024-12-10 12:40:41.870807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.870831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.880952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:35.163 [2024-12-10 12:40:41.881879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.881905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.891825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:35.163 [2024-12-10 12:40:41.892988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.893013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.902533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:37:35.163 [2024-12-10 12:40:41.903596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.903621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.912911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:37:35.163 [2024-12-10 12:40:41.913942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.913966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.924502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:35.163 [2024-12-10 12:40:41.925919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.925944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.933616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:35.163 [2024-12-10 12:40:41.934988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.935013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.942578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:35.163 [2024-12-10 12:40:41.943219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.943243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.953444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:35.163 [2024-12-10 12:40:41.954271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.954296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.964383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:35.163 [2024-12-10 12:40:41.965318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.965348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.975329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173fef90 00:37:35.163 [2024-12-10 12:40:41.976404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.976430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:35.163 [2024-12-10 12:40:41.986034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:35.163 [2024-12-10 12:40:41.987182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.163 [2024-12-10 12:40:41.987209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:41.997079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:35.423 [2024-12-10 12:40:41.998218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:41.998244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.008174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:35.423 [2024-12-10 12:40:42.009502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.009526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.018125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:37:35.423 [2024-12-10 12:40:42.019264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.019290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.028150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:37:35.423 [2024-12-10 12:40:42.029300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.029325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.039750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:35.423 [2024-12-10 12:40:42.041038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.041074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.050541] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:37:35.423 [2024-12-10 12:40:42.051954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.051979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.060250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:35.423 [2024-12-10 12:40:42.061552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.061576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.069979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:35.423 [2024-12-10 12:40:42.071084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.071109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.080081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:35.423 [2024-12-10 12:40:42.080730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.080755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.091044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:35.423 [2024-12-10 12:40:42.091804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.091829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.103021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:35.423 [2024-12-10 12:40:42.104721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.104745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.112813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:37:35.423 [2024-12-10 12:40:42.114089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.114113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 
[2024-12-10 12:40:42.123024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:35.423 [2024-12-10 12:40:42.124270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.124294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.133440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:37:35.423 [2024-12-10 12:40:42.134687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.134711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.143762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:37:35.423 [2024-12-10 12:40:42.145042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.145075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.154160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:35.423 [2024-12-10 12:40:42.155413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.155438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.163930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:35.423 [2024-12-10 12:40:42.165594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.165619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.174621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:35.423 [2024-12-10 12:40:42.175823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.175848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.185010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:35.423 [2024-12-10 12:40:42.186284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.186308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.195318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:35.423 [2024-12-10 12:40:42.196885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.196909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.205012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea248 00:37:35.423 [2024-12-10 12:40:42.205866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.205891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.216142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:37:35.423 [2024-12-10 12:40:42.217288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.217314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.227501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:37:35.423 [2024-12-10 12:40:42.228493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.228518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:35.423 [2024-12-10 12:40:42.237320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:35.423 [2024-12-10 12:40:42.238950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.423 [2024-12-10 12:40:42.238978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.248533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:35.682 [2024-12-10 12:40:42.249491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.249516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.259215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:35.682 [2024-12-10 12:40:42.260503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.260528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.269054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:35.682 [2024-12-10 12:40:42.270189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.270213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.278779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:37:35.682 [2024-12-10 12:40:42.279758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.279782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.291229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:35.682 [2024-12-10 12:40:42.292630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.292656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.301090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:37:35.682 [2024-12-10 12:40:42.302497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.302521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.309399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:35.682 [2024-12-10 12:40:42.310198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.310239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.321040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:37:35.682 [2024-12-10 12:40:42.322039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.322064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.331775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:37:35.682 [2024-12-10 12:40:42.332902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:35.682 [2024-12-10 12:40:42.332927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.341500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:35.682 [2024-12-10 12:40:42.342528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.342552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.352235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:37:35.682 [2024-12-10 12:40:42.353183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.353207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.363916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:35.682 [2024-12-10 12:40:42.365421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.365445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.372218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:35.682 [2024-12-10 12:40:42.373171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.373196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.383794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:35.682 [2024-12-10 12:40:42.384902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.384927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.394134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:35.682 [2024-12-10 12:40:42.395419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.395455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.404663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:35.682 [2024-12-10 12:40:42.405824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:5990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.405849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.415184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:35.682 [2024-12-10 12:40:42.416308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.416336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.425569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:37:35.682 [2024-12-10 12:40:42.426705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.426730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.435938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:35.682 [2024-12-10 12:40:42.437064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.437089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.446322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:37:35.682 [2024-12-10 12:40:42.447482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.447506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.456696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:37:35.682 [2024-12-10 12:40:42.457805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.457830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 [2024-12-10 12:40:42.467071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df118 00:37:35.682 [2024-12-10 12:40:42.468228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.682 [2024-12-10 12:40:42.468253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:35.682 24269.00 IOPS, 94.80 MiB/s [2024-12-10T11:40:42.508Z] [2024-12-10 12:40:42.478772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
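A note on what the condensed cycle above is exercising: in NVMe/TCP the optional data digest (DDGST) that trails a data PDU is a CRC32C over the PDU payload, and a mismatch on receive is what data_crc32_calc_done reports as a data digest error before the command completes with a transient transport error. As an illustrative sketch only (a generic bitwise CRC32C in plain C, not SPDK's accelerated implementation):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli polynomial 0x82F63B38), the checksum
 * NVMe/TCP uses for its optional header and data digests.
 * Bitwise form for clarity, not for speed. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B38u & (uint32_t)-(int32_t)(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* "123456789" is the conventional CRC check string;
     * its CRC32C is the well-known value 0xE3069283. */
    const char msg[] = "123456789";

    printf("crc32c = 0x%08X\n", crc32c(msg, sizeof(msg) - 1));
    return 0;
}

The receiver computes this over the payload it actually got and compares it with the DDGST it was sent; any corruption in flight (or, as in this test, deliberate injection) makes the two disagree.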
[log condensed: the injection cycle then continues unchanged from 12:40:42.478772 through 12:40:43.044457, a further few dozen iterations of digest error, offending WRITE, and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with only the pdu, cid, lba, and sqhd values changing.]
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.202 [2024-12-10 12:40:42.981914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:36.202 [2024-12-10 12:40:42.990750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:37:36.202 [2024-12-10 12:40:42.991968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.202 [2024-12-10 12:40:42.991994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:36.202 [2024-12-10 12:40:43.001488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:36.202 [2024-12-10 12:40:43.002611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.202 [2024-12-10 12:40:43.002636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:36.202 [2024-12-10 12:40:43.012833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:36.202 [2024-12-10 12:40:43.014237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.202 [2024-12-10 12:40:43.014263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:36.202 [2024-12-10 12:40:43.023920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:36.202 [2024-12-10 12:40:43.025461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.202 [2024-12-10 12:40:43.025486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:36.461 [2024-12-10 12:40:43.032431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:36.461 [2024-12-10 12:40:43.033340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.461 [2024-12-10 12:40:43.033365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:36.461 [2024-12-10 12:40:43.043406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:36.461 [2024-12-10 12:40:43.044428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.044457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.055103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:36.462 [2024-12-10 
12:40:43.056312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.056336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.064906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:36.462 [2024-12-10 12:40:43.066058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.066082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.075845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:36.462 [2024-12-10 12:40:43.077183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.077207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.086812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:36.462 [2024-12-10 12:40:43.088249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.088274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.097843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:36.462 [2024-12-10 12:40:43.099472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.099497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.108858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:36.462 [2024-12-10 12:40:43.110620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.110644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.116291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6020 00:37:36.462 [2024-12-10 12:40:43.117082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.117106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.128183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:36.462 [2024-12-10 12:40:43.129498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.129523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.139120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:36.462 [2024-12-10 12:40:43.140552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.140576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.150082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:36.462 [2024-12-10 12:40:43.151704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.151728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.161006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:36.462 [2024-12-10 12:40:43.162750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.162775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.168537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:36.462 [2024-12-10 12:40:43.169443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.169467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.179470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:37:36.462 [2024-12-10 12:40:43.180518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.180543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.189818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:36.462 [2024-12-10 12:40:43.190626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.190651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.200488] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:36.462 [2024-12-10 12:40:43.201523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.201548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.213284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:36.462 [2024-12-10 12:40:43.214932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.214957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.220688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:36.462 [2024-12-10 12:40:43.221343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.221368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.234573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:36.462 [2024-12-10 12:40:43.236362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.236387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.242105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:36.462 [2024-12-10 12:40:43.243072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.243096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.255145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:36.462 [2024-12-10 12:40:43.256689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.256715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.262639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:36.462 [2024-12-10 12:40:43.263315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.263340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.273622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:36.462 [2024-12-10 12:40:43.274426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.462 [2024-12-10 12:40:43.274450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:36.462 [2024-12-10 12:40:43.285957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:36.722 [2024-12-10 12:40:43.287076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.287103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.296178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:36.722 [2024-12-10 12:40:43.297311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.297337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.306726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa7d8 00:37:36.722 [2024-12-10 12:40:43.307393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.307418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.317450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:36.722 [2024-12-10 12:40:43.318426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.318458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.327050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:36.722 [2024-12-10 12:40:43.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.328043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.338000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:36.722 [2024-12-10 12:40:43.339108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.339133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.348953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:36.722 [2024-12-10 12:40:43.350190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.350215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.360104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:36.722 [2024-12-10 12:40:43.361314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.361339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.368715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:36.722 [2024-12-10 12:40:43.369399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.369425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.378957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:36.722 [2024-12-10 12:40:43.379625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.722 [2024-12-10 12:40:43.379650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:36.722 [2024-12-10 12:40:43.389208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:36.722 [2024-12-10 12:40:43.389873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.723 [2024-12-10 12:40:43.389898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:36.723 [2024-12-10 12:40:43.400911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:36.723 [2024-12-10 12:40:43.402048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.723 [2024-12-10 12:40:43.402073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:36.723 [2024-12-10 12:40:43.411605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:36.723 [2024-12-10 12:40:43.412658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.723 [2024-12-10 
00:37:36.982 [2024-12-10T11:40:43.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:36.982 12:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3890895
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3891664
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3891664 /var/tmp/bperf.sock
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3891664 ']'
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:37.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:37.918 12:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:37.918 [2024-12-10 12:40:44.699491] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:37:37.918 [2024-12-10 12:40:44.699599] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891664 ]
00:37:37.918 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:37.918 Zero copy mechanism will not be used.
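
The pass/fail decision traced just above, before the first bperf process was killed, comes down to one RPC plus a jq filter over the per-bdev NVMe error counters, which the test enables through bdev_nvme_set_options --nvme-error-stat. A minimal standalone sketch of that check, reusing the socket and bdev name from the trace (the errcount variable name is illustrative, and rpc.py is assumed to be invoked from the SPDK tree):

  # Ask the bdevperf app for per-bdev I/O statistics, then pull out how many
  # completions ended as COMMAND TRANSIENT TRANSPORT ERROR (the digest-error status).
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # the run above counted 191, so the digest-error path was exercised

The finished bdevperf is then killed and a fresh one is launched with -z, which makes it wait for a perform_tests RPC, so the next test case can configure error injection before any I/O is issued.
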
00:37:38.177 [2024-12-10 12:40:44.811392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:38.177 [2024-12-10 12:40:44.919665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:38.744 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:38.744 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:38.744 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:38.744 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:39.002 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:39.002 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:39.002 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:39.002 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:39.002 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:39.002 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:39.261 nvme0n1
00:37:39.261 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:39.261 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:39.261 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:39.261 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:39.261 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:39.261 12:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:39.261 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:39.261 Zero copy mechanism will not be used.
00:37:39.261 Running I/O for 2 seconds...
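
Condensed, the sequence traced above is what arms this test case: NVMe error counters are turned on, any leftover crc32c injection is cleared, the controller is attached with TCP data digest enabled (--ddgst), and the accel layer's software crc32c is set to corrupt every 32nd operation, so digest verification fails periodically and each affected WRITE completes as a transient transport error. A sketch of the same steps; note that the two accel_error_inject_error calls go through rpc_cmd, which carries no -s flag here and so presumably reaches the nvmf target application's default RPC socket rather than /var/tmp/bperf.sock:

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable     # clear any stale injection first
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt every 32nd crc32c operation
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
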
00:37:39.261 [2024-12-10 12:40:46.055006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.261 [2024-12-10 12:40:46.055114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.261 [2024-12-10 12:40:46.055153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.261 [2024-12-10 12:40:46.061400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.261 [2024-12-10 12:40:46.061478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.261 [2024-12-10 12:40:46.061509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.261 [2024-12-10 12:40:46.066824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.261 [2024-12-10 12:40:46.066902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.261 [2024-12-10 12:40:46.066930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.261 [2024-12-10 12:40:46.072140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.261 [2024-12-10 12:40:46.072249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.261 [2024-12-10 12:40:46.072275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.261 [2024-12-10 12:40:46.077468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.261 [2024-12-10 12:40:46.077552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.261 [2024-12-10 12:40:46.077578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.261 [2024-12-10 12:40:46.082742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.261 [2024-12-10 12:40:46.082814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.261 [2024-12-10 12:40:46.082841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.088143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.088227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.088252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.093476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.093564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.093589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.098764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.098835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.098860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.104099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.104182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.104207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.109351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.109445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.109469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.114600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.114672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.114697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.119786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.119874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.119899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.125054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.125135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.125160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.130288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.130381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.130423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.135514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.135578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.135603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.140874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.140939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.140964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.146261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.146377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.151492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.151607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.151632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.521 [2024-12-10 12:40:46.156989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.521 [2024-12-10 12:40:46.157084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.521 [2024-12-10 12:40:46.157108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.162725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.162859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.162885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.168915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.168985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.169009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.174854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.174990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.175016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.180947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.181073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.181099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.187068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.187136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.187161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.192847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.192968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.192996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.198783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.198916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.198942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.205241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.205360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.205384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.211336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.211423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.211460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.217035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.217121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.217146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.222956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.223024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.223048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.229098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.229188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.229212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.235965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.236035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.236059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.242157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.242253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.242277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.248064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 
12:40:46.248141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.248172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.254139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.254221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.254246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.259962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.260029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.260052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.266298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.266363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.266387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.272353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.272453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.272478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.278259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.278332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.278355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.284262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.284391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.284417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.289593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.289683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.289707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.295152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.295289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.295314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.301003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.301068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.301096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.306845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.306960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.307001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.313222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.313291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.313316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.522 [2024-12-10 12:40:46.319454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.522 [2024-12-10 12:40:46.319558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.522 [2024-12-10 12:40:46.319582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.523 [2024-12-10 12:40:46.325236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.523 [2024-12-10 12:40:46.325307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.523 [2024-12-10 12:40:46.325332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.523 [2024-12-10 12:40:46.330590] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.523 [2024-12-10 12:40:46.330688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.523 [2024-12-10 12:40:46.330713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.523 [2024-12-10 12:40:46.335889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.523 [2024-12-10 12:40:46.335975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.523 [2024-12-10 12:40:46.335999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.523 [2024-12-10 12:40:46.341193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.523 [2024-12-10 12:40:46.341303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.523 [2024-12-10 12:40:46.341327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.346541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.346673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.346699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.351869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.351979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.352004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.357030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.357126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.357151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.362380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.362485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.362509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.367694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.367813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.367838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.373468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.373601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.373627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.379156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.379248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.379272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.384419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.384540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.384565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.389699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.389827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.389852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.394976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.395098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.395126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.400295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.400370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.400393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.405544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.405619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.405643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.410785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.410916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.410941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.416052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.416160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.416190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.783 [2024-12-10 12:40:46.421363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.783 [2024-12-10 12:40:46.421444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-10 12:40:46.421469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.426725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.426808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.426832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.431996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.432078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.432103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.437213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.437285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 
12:40:46.437309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.442624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.442698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.442723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.448482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.448550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.448573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.454676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.454749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.454774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.459922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.460002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.460026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.465226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.465295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.465319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.470448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.470527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.470552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.475720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.475859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.475884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.480878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.480956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.480980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.486105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.486231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.486259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.491728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.491867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.491907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.497698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.497769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.497792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.503515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.503601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.503625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.508902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.508989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.509013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.514162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.514284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.514308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.519444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.519566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.519594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.524715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.524793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.524817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.530054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.530129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.530153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.535385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.535484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.535509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.540637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.540760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.540785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.545785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.545865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.545889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.550957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.551024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.551048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.556266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.556424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.556449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.562042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.562198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.562222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.784 [2024-12-10 12:40:46.567976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.784 [2024-12-10 12:40:46.568043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-10 12:40:46.568067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.785 [2024-12-10 12:40:46.573709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.785 [2024-12-10 12:40:46.573838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-10 12:40:46.573863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.785 [2024-12-10 12:40:46.579539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.785 [2024-12-10 12:40:46.579608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-10 12:40:46.579632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:39.785 [2024-12-10 12:40:46.584820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.785 [2024-12-10 12:40:46.584970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-10 12:40:46.584997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:39.785 [2024-12-10 12:40:46.591514] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.785 [2024-12-10 12:40:46.591577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-10 12:40:46.591601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:39.785 [2024-12-10 12:40:46.597491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.785 [2024-12-10 12:40:46.597576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-10 12:40:46.597601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:39.785 [2024-12-10 12:40:46.603029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:39.785 [2024-12-10 12:40:46.603104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-10 12:40:46.603128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.608311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.608380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.608403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.613756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.613836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.613860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.619193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.619276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.619300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.624775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.624859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.624883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.045 
[2024-12-10 12:40:46.630175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.630248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.630276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.635686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.635769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.635793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.641204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.641277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.641301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.647459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.647545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.647569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.653502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.653586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.653610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.659480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.659614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.659639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.665608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.665677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.665701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.671151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.671243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.671267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.678554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.678636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.678660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.684385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.684480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.684504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.689925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.689998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.690022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.695541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.695617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.695641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.700924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.700997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.701020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.706416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.706492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.706515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.712055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.712148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.712179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.717693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.717770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.717795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.723232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.723317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.045 [2024-12-10 12:40:46.723340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.045 [2024-12-10 12:40:46.728628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.045 [2024-12-10 12:40:46.728744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.728773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.734573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.734728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.734753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.741098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.741239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.741263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.748225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.748346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:40.046 [2024-12-10 12:40:46.748370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.754748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.754892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.754916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.761238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.761377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.761402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.768358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.768510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.768535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.775026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.775175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.775200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.781524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.781661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.781686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.788573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.788713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.788739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.795189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.795335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.795360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.802028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.802188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.802213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.808807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.808967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.808992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.815651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.815792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.815817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.822904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.823047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.823072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.830920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.831057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.831083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.837358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.837427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.837452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.843015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.843157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.843189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.849138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.849272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.849297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.854733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.854804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.854828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.860115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.860194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.860218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.046 [2024-12-10 12:40:46.865472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.046 [2024-12-10 12:40:46.865541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.046 [2024-12-10 12:40:46.865565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.305 [2024-12-10 12:40:46.870927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.305 [2024-12-10 12:40:46.870999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.305 [2024-12-10 12:40:46.871032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.305 [2024-12-10 12:40:46.876366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.305 [2024-12-10 12:40:46.876437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.305 [2024-12-10 12:40:46.876460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.305 [2024-12-10 12:40:46.881794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8
00:37:40.305 [2024-12-10 12:40:46.881885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.305 [2024-12-10 12:40:46.881909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:40.305 [2024-12-10 12:40:46.887309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:40.305 [2024-12-10 12:40:46.887381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.305 [2024-12-10 12:40:46.887406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[the data-digest-error / WRITE / (00/22)-completion triplet above repeats for dozens more WRITEs on the same tqpair (0x618000005080) and pdu (0x2000173ff3c8), lba values between 96 and 24160, sqhd cycling 0002/0022/0042/0062, at roughly 5 ms intervals through 12:40:47.053]
00:37:40.306 5364.00 IOPS, 670.50 MiB/s [2024-12-10T11:40:47.132Z]
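What this loop of errors is showing: NVMe/TCP can carry an optional CRC32C data digest (DDGST) after each DATA PDU, and data_crc32_calc_done fires when the digest recomputed over the received bytes does not match the trailer. Since every WRITE on this qpair fails the check in lockstep, the run appears to be deliberate digest-error injection rather than real wire corruption. Below is a minimal, dependency-free sketch of the receive-side check, using a bitwise CRC32C; SPDK's actual path uses optimized CRC32C helpers, and crc32c/ddgst_ok here are illustrative names, not SPDK functions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (reflected) CRC32C, polynomial 0x82F63B78: slow but dependency-free.
 * NVMe/TCP's DDGST is a CRC32C computed over the DATA field of the PDU. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu; /* final inversion */
}

/* Illustrative receive-side check: true when the digest recomputed over the
 * received DATA bytes matches the DDGST trailer that arrived with the PDU. */
static bool ddgst_ok(const uint8_t *data, size_t len, uint32_t received_ddgst)
{
    return crc32c(data, len) == received_ddgst;
}

int main(void)
{
    uint8_t data[32];
    uint32_t good;

    memset(data, 0xA5, sizeof(data));
    good = crc32c(data, sizeof(data)); /* digest the sender would append */

    data[7] ^= 0x01; /* one bit flipped in flight (or injected by the test)... */

    /* ...and the receiver reports exactly the kind of mismatch logged above. */
    printf("data digest %s\n",
           ddgst_ok(data, sizeof(data), good) ? "ok" : "MISMATCH");
    return 0;
}
```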
[the same triplet continues for every WRITE issued between 12:40:47.061 and 12:40:47.461 (qid:1 cid:0 nsid:1, len:32, lba values between 96 and 25440); each command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0]
00:37:40.827 [2024-12-10 12:40:47.468004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:40.827 [2024-12-10 12:40:47.468109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:40.827 [2024-12-10 12:40:47.468140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
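Each failed WRITE is completed with status (00/22) and dnr:0: status code type 0x0 (generic) and status code 0x22, Command Transient Transport Error, with the Do Not Retry bit clear, which is why the host keeps resubmitting instead of failing the I/O outright. A short sketch of how those fields unpack from the 16-bit completion status word (bit layout per the NVMe base specification; the struct and helper names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Fields of the 16-bit NVMe completion status word (upper half of completion
 * dword 3, phase tag included), as printed in the log lines above. */
struct cpl_status {
    uint8_t p;   /* phase tag,        bit  0    */
    uint8_t sc;  /* status code,      bits 1-8  */
    uint8_t sct; /* status code type, bits 9-11 */
    uint8_t m;   /* more,             bit  14   */
    uint8_t dnr; /* do not retry,     bit  15   */
};

static struct cpl_status decode_status(uint16_t raw)
{
    struct cpl_status s = {
        .p   = raw & 0x1,
        .sc  = (raw >> 1) & 0xFF,
        .sct = (raw >> 9) & 0x7,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT 0x0 (generic), SC 0x22 (Command Transient Transport Error), DNR 0:
     * the "(00/22) ... dnr:0" seen on every completion above. */
    uint16_t raw = (uint16_t)((0x0 << 9) | (0x22 << 1));
    struct cpl_status s = decode_status(raw);

    printf("sct:%02x sc:%02x dnr:%d -> %s\n", s.sct, s.sc, s.dnr,
           s.dnr == 0 ? "host may retry" : "do not retry");
    return 0;
}
```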
[the triplet continues for the WRITEs between 12:40:47.474 and 12:40:47.749, lba values between 576 and 25536, all with the same retryable (00/22) completion]
00:37:41.088 [2024-12-10 12:40:47.756081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with
pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.756240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.756266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.762724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.762868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.762893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.769215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.769352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.769377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.775764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.775903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.775932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.782662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.782800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.782825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.789127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.789295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.789321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.795662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.795819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.795845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.802352] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.802498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.802523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.808950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.809092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.809117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.815716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.815851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.815876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.821715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.822122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.822148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.827633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.827998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.828023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.833520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.833902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.833932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.839676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.840039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.840065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.088 
[2024-12-10 12:40:47.845773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.088 [2024-12-10 12:40:47.846135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.088 [2024-12-10 12:40:47.846160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.088 [2024-12-10 12:40:47.852600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.852961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.852986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.858939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.859337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.859363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.865459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.865817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.865841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.871559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.871912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.871936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.878413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.878863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.878888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.885139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.885510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.885536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.892566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.892889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.892914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.898768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.899096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.899121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.904381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.904707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.904732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.089 [2024-12-10 12:40:47.909900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.089 [2024-12-10 12:40:47.910216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.089 [2024-12-10 12:40:47.910241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.915017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.915355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.915379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.920323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.920666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.920690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.926509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.926866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.926892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.932292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.932620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.932645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.937797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.938149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.938181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.943350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.943673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.943699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.948977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.949325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.949351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.954197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.954554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.954580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.959231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.959549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.959575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.964086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.964432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.964457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.969199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.969546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.969572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.974086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.974386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.974412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.978749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.979014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.979039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.983162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.983459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.983489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.987628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.987891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.987916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.992344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.992620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.992645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:47.997959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:47.998242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:47.998267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:48.002562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:48.002844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:48.002869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:48.007112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:48.007411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:48.007446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:48.012216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:48.012467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:48.012492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:48.018102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:48.018478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:48.018503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:48.023586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:48.023906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:48.023959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:48.030485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 12:40:48.030774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.349 [2024-12-10 12:40:48.030800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.349 [2024-12-10 12:40:48.036865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.349 [2024-12-10 
00:37:41.350 5274.50 IOPS, 659.31 MiB/s
00:37:41.350 Latency(us)
00:37:41.350 [2024-12-10T11:40:48.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:41.350 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:37:41.350 nvme0n1 : 2.00 5274.89 659.36 0.00 0.00 3028.05 1919.27 8488.47
00:37:41.350 [2024-12-10T11:40:48.176Z] ===================================================================================================================
00:37:41.350 [2024-12-10T11:40:48.176Z] Total : 5274.89 659.36 0.00 0.00 3028.05 1919.27 8488.47
00:37:41.350 {
00:37:41.350 "results": [
00:37:41.350 {
00:37:41.350 "job": "nvme0n1",
00:37:41.350 "core_mask": "0x2",
00:37:41.350 "workload": "randwrite",
00:37:41.350 "status": "finished",
00:37:41.350 "queue_depth": 16,
00:37:41.350 "io_size": 131072,
00:37:41.350 "runtime": 2.003833,
00:37:41.350 "iops": 5274.89067202706,
00:37:41.350 "mibps": 659.3613340033825,
00:37:41.350 "io_failed": 0,
00:37:41.350 "io_timeout": 0,
00:37:41.350 "avg_latency_us": 3028.0450871739426,
00:37:41.350 "min_latency_us": 1919.2685714285715,
"max_latency_us": 8488.47238095238 00:37:41.350 } 00:37:41.350 ], 00:37:41.350 "core_count": 1 00:37:41.350 } 00:37:41.350 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:41.350 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:41.350 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:41.350 | .driver_specific 00:37:41.350 | .nvme_error 00:37:41.350 | .status_code 00:37:41.350 | .command_transient_transport_error' 00:37:41.350 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 341 > 0 )) 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3891664 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3891664 ']' 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3891664 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891664 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891664' 00:37:41.609 killing process with pid 3891664 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3891664 00:37:41.609 Received shutdown signal, test time was about 2.000000 seconds 00:37:41.609 00:37:41.609 Latency(us) 00:37:41.609 [2024-12-10T11:40:48.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.609 [2024-12-10T11:40:48.435Z] =================================================================================================================== 00:37:41.609 [2024-12-10T11:40:48.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:41.609 12:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3891664 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3889276 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3889276 ']' 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3889276 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:42.545 12:40:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3889276 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3889276' 00:37:42.545 killing process with pid 3889276 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3889276 00:37:42.545 12:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3889276 00:37:43.924 00:37:43.924 real 0m21.074s 00:37:43.924 user 0m39.567s 00:37:43.924 sys 0m4.817s 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:43.924 ************************************ 00:37:43.924 END TEST nvmf_digest_error 00:37:43.924 ************************************ 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:43.924 rmmod nvme_tcp 00:37:43.924 rmmod nvme_fabrics 00:37:43.924 rmmod nvme_keyring 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3889276 ']' 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3889276 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3889276 ']' 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3889276 00:37:43.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3889276) - No such process 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3889276 is not found' 00:37:43.924 Process with pid 3889276 is not found 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # 
iptr 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:43.924 12:40:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:45.829 00:37:45.829 real 0m51.446s 00:37:45.829 user 1m23.880s 00:37:45.829 sys 0m13.767s 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:45.829 ************************************ 00:37:45.829 END TEST nvmf_digest 00:37:45.829 ************************************ 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.829 ************************************ 00:37:45.829 START TEST nvmf_bdevperf 00:37:45.829 ************************************ 00:37:45.829 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:46.089 * Looking for test storage... 
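Editorial aside (not captured log output): the nvmf_digest_error test that finished just above passes by reading back the transient-error counter that the injected data-digest failures increment; the trace reads it over the bperf RPC socket and asserts (( 341 > 0 )). A minimal standalone sketch of that same check, assuming an SPDK bdevperf process serving RPCs on /var/tmp/bperf.sock and a bdev named nvme0n1, both as seen in this run:
#!/usr/bin/env bash
# Fetch per-bdev I/O statistics over the RPC socket and pull out the NVMe
# "command transient transport error" status counter, the same query that
# host/digest.sh issues via bperf_rpc in the trace above.
count=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# A non-zero count means the injected digest errors were observed on the wire
# and completed as transient transport errors, so the test can pass.
(( count > 0 )) && echo "OK: $count transient transport errors recorded"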
00:37:46.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:46.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.089 --rc genhtml_branch_coverage=1 00:37:46.089 --rc genhtml_function_coverage=1 00:37:46.089 --rc genhtml_legend=1 00:37:46.089 --rc geninfo_all_blocks=1 00:37:46.089 --rc geninfo_unexecuted_blocks=1 00:37:46.089 00:37:46.089 ' 00:37:46.089 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:46.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.089 --rc genhtml_branch_coverage=1 00:37:46.089 --rc genhtml_function_coverage=1 00:37:46.089 --rc genhtml_legend=1 00:37:46.089 --rc geninfo_all_blocks=1 00:37:46.089 --rc geninfo_unexecuted_blocks=1 00:37:46.089 00:37:46.089 ' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:46.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.090 --rc genhtml_branch_coverage=1 00:37:46.090 --rc genhtml_function_coverage=1 00:37:46.090 --rc genhtml_legend=1 00:37:46.090 --rc geninfo_all_blocks=1 00:37:46.090 --rc geninfo_unexecuted_blocks=1 00:37:46.090 00:37:46.090 ' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:46.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.090 --rc genhtml_branch_coverage=1 00:37:46.090 --rc genhtml_function_coverage=1 00:37:46.090 --rc genhtml_legend=1 00:37:46.090 --rc geninfo_all_blocks=1 00:37:46.090 --rc geninfo_unexecuted_blocks=1 00:37:46.090 00:37:46.090 ' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:46.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:46.090 12:40:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:51.361 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:51.361 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
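
The "[: : integer expression expected" complaint earlier in this trace comes from test/nvmf/common.sh line 33, where an unset flag reaches bash's integer test as an empty string ('[' '' -eq 1 ']'). A minimal reproduction and a defensive rewrite, with a hypothetical stand-in name for the unset variable:

    flag=""                                         # hypothetical stand-in for the flag tested at line 33
    [ "$flag" -eq 1 ] && echo on                    # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on || echo off   # defaulting to 0 keeps the operand numeric
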
00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:51.361 Found net devices under 0000:af:00.0: cvl_0_0 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:51.361 Found net devices under 0000:af:00.1: cvl_0_1 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:51.361 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:51.362 12:40:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:51.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:51.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:37:51.362 00:37:51.362 --- 10.0.0.2 ping statistics --- 00:37:51.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.362 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:51.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:51.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:37:51.362 00:37:51.362 --- 10.0.0.1 ping statistics --- 00:37:51.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.362 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3895961 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3895961 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3895961 ']' 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:51.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:51.362 12:40:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:51.621 [2024-12-10 12:40:58.198259] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
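
The device scan above resolves each supported PCI function to its kernel net interface purely by globbing sysfs, as the pci_net_devs assignments show. Condensed into a standalone sketch (the PCI address is the first e810 function found in this run):

    pci=0000:af:00.0                                   # 0x8086:0x159b function detected above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev bound to the function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # prints cvl_0_0 on this host
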
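nvmf_tcp_init then wires the two ports into a loopback test topology: the target port cvl_0_0 (10.0.0.2) moves into a private network namespace, the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, and an iptables rule admits the NVMe/TCP listener port. The same commands as above, condensed (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target, as verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
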
00:37:51.621 [2024-12-10 12:40:58.198345] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:51.621 [2024-12-10 12:40:58.316195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:51.621 [2024-12-10 12:40:58.417695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:51.621 [2024-12-10 12:40:58.417737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:51.621 [2024-12-10 12:40:58.417746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:51.621 [2024-12-10 12:40:58.417756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:51.621 [2024-12-10 12:40:58.417764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:51.621 [2024-12-10 12:40:58.419860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:51.621 [2024-12-10 12:40:58.419926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.621 [2024-12-10 12:40:58.419936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:52.188 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:52.189 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:52.189 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:52.189 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:52.189 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.447 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:52.447 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:52.447 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.447 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.448 [2024-12-10 12:40:59.050697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.448 Malloc0 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
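
nvmfappstart launched the target inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, reactors on cores 1-3 per the 0xE mask) and waitforlisten blocked until the RPC socket answered. A minimal sketch of the same launch-and-wait pattern from an SPDK checkout; the rpc_get_methods poll is a stand-in for autotest's waitforlisten helper, not the helper itself:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready to serve requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
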
00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.448 [2024-12-10 12:40:59.163056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:52.448 { 00:37:52.448 "params": { 00:37:52.448 "name": "Nvme$subsystem", 00:37:52.448 "trtype": "$TEST_TRANSPORT", 00:37:52.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:52.448 "adrfam": "ipv4", 00:37:52.448 "trsvcid": "$NVMF_PORT", 00:37:52.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:52.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:52.448 "hdgst": ${hdgst:-false}, 00:37:52.448 "ddgst": ${ddgst:-false} 00:37:52.448 }, 00:37:52.448 "method": "bdev_nvme_attach_controller" 00:37:52.448 } 00:37:52.448 EOF 00:37:52.448 )") 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:52.448 12:40:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:52.448 "params": { 00:37:52.448 "name": "Nvme1", 00:37:52.448 "trtype": "tcp", 00:37:52.448 "traddr": "10.0.0.2", 00:37:52.448 "adrfam": "ipv4", 00:37:52.448 "trsvcid": "4420", 00:37:52.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:52.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:52.448 "hdgst": false, 00:37:52.448 "ddgst": false 00:37:52.448 }, 00:37:52.448 "method": "bdev_nvme_attach_controller" 00:37:52.448 }' 00:37:52.448 [2024-12-10 12:40:59.242877] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
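
The rpc_cmd calls above amount to a five-step target configuration: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. Replayed against a live target with scripts/rpc.py, using the same arguments the trace shows:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
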
00:37:52.448 [2024-12-10 12:40:59.242965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896180 ] 00:37:52.707 [2024-12-10 12:40:59.356115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.707 [2024-12-10 12:40:59.469422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.275 Running I/O for 1 seconds... 00:37:54.212 9768.00 IOPS, 38.16 MiB/s 00:37:54.212 Latency(us) 00:37:54.212 [2024-12-10T11:41:01.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.212 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:54.212 Verification LBA range: start 0x0 length 0x4000 00:37:54.212 Nvme1n1 : 1.01 9850.04 38.48 0.00 0.00 12925.80 2418.59 11921.31 00:37:54.212 [2024-12-10T11:41:01.038Z] =================================================================================================================== 00:37:54.212 [2024-12-10T11:41:01.038Z] Total : 9850.04 38.48 0.00 0.00 12925.80 2418.59 11921.31 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3896520 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:55.149 { 00:37:55.149 "params": { 00:37:55.149 "name": "Nvme$subsystem", 00:37:55.149 "trtype": "$TEST_TRANSPORT", 00:37:55.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:55.149 "adrfam": "ipv4", 00:37:55.149 "trsvcid": "$NVMF_PORT", 00:37:55.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:55.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:55.149 "hdgst": ${hdgst:-false}, 00:37:55.149 "ddgst": ${ddgst:-false} 00:37:55.149 }, 00:37:55.149 "method": "bdev_nvme_attach_controller" 00:37:55.149 } 00:37:55.149 EOF 00:37:55.149 )") 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
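
gen_nvmf_target_json above emits the bdev_nvme_attach_controller entry that bdevperf reads over /dev/fd/62. Below is the same run from a file, followed by the fault the test injects next: a second, longer run during which host/bdevperf.sh kill -9s the nvmf_tgt pid after a short sleep. The "subsystems" wrapper follows the standard SPDK application JSON layout; treat the exact wrapping as an assumption, since the trace prints only the inner object:

    echo '{ "subsystems": [ { "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } } ] } ] }' > /tmp/nvme1.json
    ./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1       # short verify pass
    ./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"    # yank the target mid-run; queued I/O completes as ABORTED (see below)
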
00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:55.149 12:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:55.149 "params": { 00:37:55.149 "name": "Nvme1", 00:37:55.149 "trtype": "tcp", 00:37:55.149 "traddr": "10.0.0.2", 00:37:55.149 "adrfam": "ipv4", 00:37:55.149 "trsvcid": "4420", 00:37:55.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:55.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:55.149 "hdgst": false, 00:37:55.149 "ddgst": false 00:37:55.149 }, 00:37:55.149 "method": "bdev_nvme_attach_controller" 00:37:55.149 }' 00:37:55.149 [2024-12-10 12:41:01.886283] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:37:55.149 [2024-12-10 12:41:01.886368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896520 ] 00:37:55.408 [2024-12-10 12:41:02.003252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.408 [2024-12-10 12:41:02.114760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.976 Running I/O for 15 seconds... 00:37:58.294 9764.00 IOPS, 38.14 MiB/s [2024-12-10T11:41:05.120Z] 9817.00 IOPS, 38.35 MiB/s [2024-12-10T11:41:05.120Z] 12:41:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3895961 00:37:58.294 12:41:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:58.294 [2024-12-10 12:41:04.838513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:58.294 [2024-12-10 12:41:04.838566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.294 [2024-12-10 12:41:04.838596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:58.294 [2024-12-10 12:41:04.838608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.294 [2024-12-10 12:41:04.838622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:58.294 [2024-12-10 12:41:04.838632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.294 [2024-12-10 12:41:04.838645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:58.294 [2024-12-10 12:41:04.838656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.294 [2024-12-10 12:41:04.838669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:58.294 [2024-12-10 12:41:04.838679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.294 [2024-12-10 12:41:04.838691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:58.294 [2024-12-10 12:41:04.838702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... further repeated nvme_qpair.c NOTICE pairs elided: nvme_io_qpair_print_command reprints each in-flight READ/WRITE on qid:1 (lba 35144-36152, len:8) and spdk_nvme_print_completion completes it with ABORTED - SQ DELETION (00/08), the expected fallout of killing the target mid-run ...] 
00:37:58.297 [2024-12-10 12:41:04.840824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:58.297 [2024-12-10 12:41:04.840935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.840987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.840996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.297 [2024-12-10 12:41:04.841247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.841257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:37:58.297 [2024-12-10 12:41:04.841270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:58.297 [2024-12-10 12:41:04.841282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:58.297 [2024-12-10 12:41:04.841292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36096 len:8 PRP1 0x0 PRP2 0x0 00:37:58.297 [2024-12-10 12:41:04.841303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:58.297 [2024-12-10 12:41:04.844729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.844810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.845366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.845391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.845404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.845604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.845802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.845818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.845830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.845842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
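The long dump above is the qpair teardown path: when the TCP connection to the target drops, SPDK manually completes every queued I/O with the synthetic status ABORTED - SQ DELETION, i.e. Generic Command Status (SCT 00h) / Command Aborted due to SQ Deletion (SC 08h) -- the "(00/08)" printed on each completion line. A minimal sketch of how an application completion callback could recognize these completions, using only public definitions from spdk/nvme.h (the callback and its context here are illustrative, not part of the test code):

    /* Sketch only: detect the "ABORTED - SQ DELETION (00/08)" completions
     * shown above. io_complete matches the spdk_nvme_cmd_cb typedef. */
    #include "spdk/nvme.h"

    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&           /* 00h */
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) { /* 08h */
            /* The command never executed on the target: its submission
             * queue was deleted while the controller was being reset,
             * so it is safe to requeue or fail the I/O upward. */
        }
    }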
00:37:58.298 [2024-12-10 12:41:04.858340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.858718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.858742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.858752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.858943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.859133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.859144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.859153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.859162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.298 [2024-12-10 12:41:04.871395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.871826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.871848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.871858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.872050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.872250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.872262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.872271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.872280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.298 [2024-12-10 12:41:04.884549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.884878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.884900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.884910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.885098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.885297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.885308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.885317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.885329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.298 [2024-12-10 12:41:04.897772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.898147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.898176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.898187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.898378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.898568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.898578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.898587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.898596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
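Each retry block above has the same shape: the controller is disconnected, a reconnect is attempted, and posix_sock_create's connect() fails with errno = 111, which is ECONNREFUSED -- nothing is listening on 10.0.0.2:4420 while the target side is down. The failure is reproducible with plain POSIX sockets, independent of SPDK (a standalone sketch; the address and port simply mirror the log, with no listener assumed to be running there):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no NVMe-oF listener on the port this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }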
00:37:58.298 [2024-12-10 12:41:04.910943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.911422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.911482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.911516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.912185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.912702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.912713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.912721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.912730] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.298 [2024-12-10 12:41:04.924071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.924495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.924517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.924527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.924720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.924910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.924922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.924930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.924939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.298 [2024-12-10 12:41:04.937089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.937579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.937647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.937679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.938343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.938591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.938602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.938610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.938619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.298 [2024-12-10 12:41:04.950342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.950810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.950867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.950899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.951563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.951914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.951925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.951933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.951942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.298 [2024-12-10 12:41:04.963417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.298 [2024-12-10 12:41:04.963914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.298 [2024-12-10 12:41:04.963973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.298 [2024-12-10 12:41:04.964005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.298 [2024-12-10 12:41:04.964545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.298 [2024-12-10 12:41:04.964735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.298 [2024-12-10 12:41:04.964745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.298 [2024-12-10 12:41:04.964754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.298 [2024-12-10 12:41:04.964763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.298 [2024-12-10 12:41:04.976525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:04.976971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:04.976992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:04.977007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:04.977194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:04.977399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:04.977410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:04.977418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:04.977427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.299 [2024-12-10 12:41:04.989644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:04.990106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:04.990127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:04.990137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:04.990330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:04.990525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:04.990536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:04.990544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:04.990553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.299 [2024-12-10 12:41:05.002723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:05.003152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:05.003177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:05.003188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:05.003367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:05.003546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:05.003556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:05.003565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:05.003573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.299 [2024-12-10 12:41:05.015783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:05.016139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:05.016160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:05.016176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:05.016380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:05.016573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:05.016584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:05.016593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:05.016602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.299 [2024-12-10 12:41:05.028821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:05.029302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:05.029361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:05.029394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:05.029964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:05.030143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:05.030154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:05.030162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:05.030177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.299 [2024-12-10 12:41:05.041995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:05.042465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:05.042487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:05.042497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:05.042686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:05.042875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:05.042886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:05.042895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:05.042903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.299 [2024-12-10 12:41:05.055021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:05.055503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:05.055576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:05.055608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:05.056058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:05.056253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:05.056265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:05.056277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:05.056285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.299 [2024-12-10 12:41:05.068080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:05.068559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:05.068580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:05.068590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:05.068779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:05.068966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.299 [2024-12-10 12:41:05.068977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.299 [2024-12-10 12:41:05.068986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.299 [2024-12-10 12:41:05.068995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.299 [2024-12-10 12:41:05.081115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.299 [2024-12-10 12:41:05.081565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.299 [2024-12-10 12:41:05.081631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.299 [2024-12-10 12:41:05.081664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.299 [2024-12-10 12:41:05.082265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.299 [2024-12-10 12:41:05.082454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.300 [2024-12-10 12:41:05.082465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.300 [2024-12-10 12:41:05.082473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.300 [2024-12-10 12:41:05.082482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.300 [2024-12-10 12:41:05.094290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.300 [2024-12-10 12:41:05.094760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.300 [2024-12-10 12:41:05.094782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.300 [2024-12-10 12:41:05.094792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.300 [2024-12-10 12:41:05.094986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.300 [2024-12-10 12:41:05.095187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.300 [2024-12-10 12:41:05.095199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.300 [2024-12-10 12:41:05.095208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.300 [2024-12-10 12:41:05.095217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.300 [2024-12-10 12:41:05.107717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.300 [2024-12-10 12:41:05.108115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.300 [2024-12-10 12:41:05.108139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.300 [2024-12-10 12:41:05.108149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.300 [2024-12-10 12:41:05.108349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.300 [2024-12-10 12:41:05.108546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.300 [2024-12-10 12:41:05.108558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.300 [2024-12-10 12:41:05.108568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.300 [2024-12-10 12:41:05.108577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.560 [2024-12-10 12:41:05.121192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.121583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.121605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.121615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.121809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.122004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.122015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.122025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.122033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.560 [2024-12-10 12:41:05.134518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.134903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.134924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.134934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.135124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.135318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.135330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.135339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.135348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.560 [2024-12-10 12:41:05.147615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.148056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.148081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.148091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.148287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.148478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.148489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.148498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.148506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.560 [2024-12-10 12:41:05.160652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.161138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.161207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.161240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.161889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.162363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.162381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.162395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.162409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.560 [2024-12-10 12:41:05.174762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.175223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.175247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.175259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.175466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.175672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.175684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.175693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.175702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.560 [2024-12-10 12:41:05.187833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.188209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.188268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.188301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.188960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.189427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.189438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.189447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.189456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.560 [2024-12-10 12:41:05.200969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.201440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.201500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.201533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.202174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.202416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.202433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.202448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.202462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.560 [2024-12-10 12:41:05.215039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.215546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.215605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.215637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.216233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.216440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.216452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.560 [2024-12-10 12:41:05.216461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.560 [2024-12-10 12:41:05.216470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.560 [2024-12-10 12:41:05.228163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.560 [2024-12-10 12:41:05.228609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.560 [2024-12-10 12:41:05.228645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.560 [2024-12-10 12:41:05.228678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.560 [2024-12-10 12:41:05.229252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.560 [2024-12-10 12:41:05.229445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.560 [2024-12-10 12:41:05.229456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.229465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.229474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.561 [2024-12-10 12:41:05.241258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.241713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.241734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.241743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.241928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.242107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.242117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.242125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.242133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.561 [2024-12-10 12:41:05.254293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.254681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.254702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.254712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.254901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.255090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.255101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.255110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.255118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.561 [2024-12-10 12:41:05.267449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.267928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.267987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.268020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.268523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.268712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.268723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.268735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.268744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.561 [2024-12-10 12:41:05.280542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.281017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.281039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.281048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.281244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.281434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.281444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.281453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.281461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.561 [2024-12-10 12:41:05.293605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.294056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.294077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.294087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.294293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.294482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.294493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.294502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.294511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.561 [2024-12-10 12:41:05.306660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.307012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.307032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.307041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.307244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.307434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.307445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.307454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.307462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.561 [2024-12-10 12:41:05.319760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.320136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.320156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.320171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.320376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.320565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.320576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.320585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.320593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.561 [2024-12-10 12:41:05.332874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.333317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.333378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.333411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.334062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.334688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.334698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.334706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.334715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.561 [2024-12-10 12:41:05.345924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.346411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.346432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.346443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.346638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.346832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.346843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.346852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.561 [2024-12-10 12:41:05.346860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.561 [2024-12-10 12:41:05.359324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.561 [2024-12-10 12:41:05.359656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.561 [2024-12-10 12:41:05.359681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.561 [2024-12-10 12:41:05.359691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.561 [2024-12-10 12:41:05.359879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.561 [2024-12-10 12:41:05.360068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.561 [2024-12-10 12:41:05.360079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.561 [2024-12-10 12:41:05.360087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.562 [2024-12-10 12:41:05.360096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.562 [2024-12-10 12:41:05.372649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.562 [2024-12-10 12:41:05.373063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.562 [2024-12-10 12:41:05.373084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.562 [2024-12-10 12:41:05.373093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.562 [2024-12-10 12:41:05.373288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.562 [2024-12-10 12:41:05.373477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.562 [2024-12-10 12:41:05.373488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.562 [2024-12-10 12:41:05.373497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.562 [2024-12-10 12:41:05.373505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.822 [2024-12-10 12:41:05.385986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.386444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.386465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.822 [2024-12-10 12:41:05.386476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.822 [2024-12-10 12:41:05.386664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.822 [2024-12-10 12:41:05.386852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.822 [2024-12-10 12:41:05.386863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.822 [2024-12-10 12:41:05.386872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.822 [2024-12-10 12:41:05.386880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.822 [2024-12-10 12:41:05.399155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.399612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.399669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.822 [2024-12-10 12:41:05.399702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.822 [2024-12-10 12:41:05.400119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.822 [2024-12-10 12:41:05.400313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.822 [2024-12-10 12:41:05.400325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.822 [2024-12-10 12:41:05.400333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.822 [2024-12-10 12:41:05.400342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.822 [2024-12-10 12:41:05.412277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.412728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.412749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.822 [2024-12-10 12:41:05.412759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.822 [2024-12-10 12:41:05.412937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.822 [2024-12-10 12:41:05.413116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.822 [2024-12-10 12:41:05.413127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.822 [2024-12-10 12:41:05.413135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.822 [2024-12-10 12:41:05.413143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.822 [2024-12-10 12:41:05.425445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.425920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.425977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.822 [2024-12-10 12:41:05.426009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.822 [2024-12-10 12:41:05.426603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.822 [2024-12-10 12:41:05.426792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.822 [2024-12-10 12:41:05.426825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.822 [2024-12-10 12:41:05.426834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.822 [2024-12-10 12:41:05.426843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
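The timestamps show the attempts pacing out at roughly 13 ms intervals (12:41:05.200, .215, .228, ...), each running the same disconnect/connect/fail sequence. A hedged sketch of such a retry loop under assumed parameters — plain blocking POSIX code standing in for the poller-driven bdev_nvme reconnect path, which this is not:

```c
/* Hypothetical retry loop mirroring the logged sequence; the real
 * bdev_nvme reconnect logic is asynchronous and poller-driven. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static bool try_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (!ok)
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
    close(fd); /* on failure the fd is torn down, as in the "Bad file descriptor" lines */
    return ok;
}

int main(void)
{
    /* ~13 ms between attempts, matching the log's timestamp spacing */
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 13 * 1000 * 1000 };

    for (int attempt = 1; attempt <= 5; attempt++) {
        fprintf(stderr, "attempt %d: resetting controller\n", attempt);
        if (try_connect("10.0.0.2", 4420))
            return 0;                     /* reconnected */
        fprintf(stderr, "attempt %d: resetting controller failed\n", attempt);
        nanosleep(&delay, NULL);
    }
    return 1; /* give up: controller left in failed state */
}
```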
00:37:58.822 [2024-12-10 12:41:05.438495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.438939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.438959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.822 [2024-12-10 12:41:05.438969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.822 [2024-12-10 12:41:05.439148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.822 [2024-12-10 12:41:05.439358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.822 [2024-12-10 12:41:05.439373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.822 [2024-12-10 12:41:05.439382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.822 [2024-12-10 12:41:05.439390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.822 [2024-12-10 12:41:05.451606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.451964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.452020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.822 [2024-12-10 12:41:05.452053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.822 [2024-12-10 12:41:05.452715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.822 [2024-12-10 12:41:05.453157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.822 [2024-12-10 12:41:05.453172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.822 [2024-12-10 12:41:05.453182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.822 [2024-12-10 12:41:05.453191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.822 [2024-12-10 12:41:05.464668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.465129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.465188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.822 [2024-12-10 12:41:05.465224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.822 [2024-12-10 12:41:05.465869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.822 [2024-12-10 12:41:05.466059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.822 [2024-12-10 12:41:05.466070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.822 [2024-12-10 12:41:05.466078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.822 [2024-12-10 12:41:05.466087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.822 [2024-12-10 12:41:05.477723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.822 [2024-12-10 12:41:05.478175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.822 [2024-12-10 12:41:05.478197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.478207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.478395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.478583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.478594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.478603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.478615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.823 [2024-12-10 12:41:05.490748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.491202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.491224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.491234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.491427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.491605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.491616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.491624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.491632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.823 [2024-12-10 12:41:05.503912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.504315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.504336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.504346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.504526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.504704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.504715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.504724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.504732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.823 [2024-12-10 12:41:05.517020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.517498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.517519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.517529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.517718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.517906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.517917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.517926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.517935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.823 [2024-12-10 12:41:05.530060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.530558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.530617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.530649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.531187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.531376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.531387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.531396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.531404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.823 [2024-12-10 12:41:05.543189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.543558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.543579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.543588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.543766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.543944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.543954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.543962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.543970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.823 [2024-12-10 12:41:05.556351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.556803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.556854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.556886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.557442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.557631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.557642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.557651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.557659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.823 [2024-12-10 12:41:05.569464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.569884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.569905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.569917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.570095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.570300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.570312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.570321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.570329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.823 [2024-12-10 12:41:05.582615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.583057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.583077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.583086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.583292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.583481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.583492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.583500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.583509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.823 [2024-12-10 12:41:05.595661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.596109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.596130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.596140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.823 [2024-12-10 12:41:05.596354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.823 [2024-12-10 12:41:05.596547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.823 [2024-12-10 12:41:05.596559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.823 [2024-12-10 12:41:05.596568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.823 [2024-12-10 12:41:05.596576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.823 [2024-12-10 12:41:05.609031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.823 [2024-12-10 12:41:05.609479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.823 [2024-12-10 12:41:05.609501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.823 [2024-12-10 12:41:05.609511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.824 [2024-12-10 12:41:05.609705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.824 [2024-12-10 12:41:05.609903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.824 [2024-12-10 12:41:05.609914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.824 [2024-12-10 12:41:05.609923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.824 [2024-12-10 12:41:05.609938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:58.824 [2024-12-10 12:41:05.622276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.824 [2024-12-10 12:41:05.622734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.824 [2024-12-10 12:41:05.622755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.824 [2024-12-10 12:41:05.622765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.824 [2024-12-10 12:41:05.622954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.824 [2024-12-10 12:41:05.623143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.824 [2024-12-10 12:41:05.623154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.824 [2024-12-10 12:41:05.623162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.824 [2024-12-10 12:41:05.623178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:58.824 [2024-12-10 12:41:05.635328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:58.824 [2024-12-10 12:41:05.635799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:58.824 [2024-12-10 12:41:05.635820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:58.824 [2024-12-10 12:41:05.635830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:58.824 [2024-12-10 12:41:05.636018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:58.824 [2024-12-10 12:41:05.636213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:58.824 [2024-12-10 12:41:05.636225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:58.824 [2024-12-10 12:41:05.636233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:58.824 [2024-12-10 12:41:05.636242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.084 [2024-12-10 12:41:05.648664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.084 [2024-12-10 12:41:05.649108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.084 [2024-12-10 12:41:05.649189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.084 [2024-12-10 12:41:05.649223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.084 [2024-12-10 12:41:05.649872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.084 [2024-12-10 12:41:05.650494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.084 [2024-12-10 12:41:05.650506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.084 [2024-12-10 12:41:05.650518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.084 [2024-12-10 12:41:05.650527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.084 [2024-12-10 12:41:05.661772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.084 [2024-12-10 12:41:05.662230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.084 [2024-12-10 12:41:05.662251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.084 [2024-12-10 12:41:05.662261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.084 [2024-12-10 12:41:05.662441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.084 [2024-12-10 12:41:05.662619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.084 [2024-12-10 12:41:05.662629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.084 [2024-12-10 12:41:05.662638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.084 [2024-12-10 12:41:05.662646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.084 [2024-12-10 12:41:05.674853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.084 [2024-12-10 12:41:05.675326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.084 [2024-12-10 12:41:05.675380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.084 [2024-12-10 12:41:05.675415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.084 [2024-12-10 12:41:05.676064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.084 [2024-12-10 12:41:05.676508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.084 [2024-12-10 12:41:05.676519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.084 [2024-12-10 12:41:05.676528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.084 [2024-12-10 12:41:05.676537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.084 [2024-12-10 12:41:05.688001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.084 [2024-12-10 12:41:05.688492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.084 [2024-12-10 12:41:05.688550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.084 [2024-12-10 12:41:05.688583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.084 [2024-12-10 12:41:05.689249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.084 [2024-12-10 12:41:05.689725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.085 [2024-12-10 12:41:05.689736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.085 [2024-12-10 12:41:05.689745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.085 [2024-12-10 12:41:05.689756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.085 [2024-12-10 12:41:05.701079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.085 [2024-12-10 12:41:05.701465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.085 [2024-12-10 12:41:05.701487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.085 [2024-12-10 12:41:05.701497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.085 [2024-12-10 12:41:05.701685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.085 [2024-12-10 12:41:05.701874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.085 [2024-12-10 12:41:05.701885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.085 [2024-12-10 12:41:05.701893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.085 [2024-12-10 12:41:05.701902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.085 6925.67 IOPS, 27.05 MiB/s [2024-12-10T11:41:05.911Z] [2024-12-10 12:41:05.714874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.085 [2024-12-10 12:41:05.715342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.085 [2024-12-10 12:41:05.715404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.085 [2024-12-10 12:41:05.715437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.085 [2024-12-10 12:41:05.715814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.085 [2024-12-10 12:41:05.715992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.085 [2024-12-10 12:41:05.716002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.085 [2024-12-10 12:41:05.716011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.085 [2024-12-10 12:41:05.716019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
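The figure "6925.67 IOPS, 27.05 MiB/s" interleaved above is the benchmark's periodic rate sample, reported while the reconnect attempts continue. The two numbers are mutually consistent if the workload issues 4 KiB I/O — an assumption, since the block size is not stated in this part of the log: 6925.67 × 4096 B ≈ 27.05 MiB/s. A one-line check:

```c
/* Sanity check of the rate sample above, assuming 4 KiB I/O
 * (the block size is an assumption, not stated in this log excerpt). */
#include <stdio.h>

int main(void)
{
    double iops = 6925.67;
    double io_size = 4096.0;                       /* bytes, assumed */
    double mib_s = iops * io_size / (1024 * 1024); /* -> ~27.05 MiB/s */
    printf("%.2f MiB/s\n", mib_s);
    return 0;
}
```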
00:37:59.085 [2024-12-10 12:41:05.727982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.085 [2024-12-10 12:41:05.728438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.085 [2024-12-10 12:41:05.728497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.085 [2024-12-10 12:41:05.728529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.085 [2024-12-10 12:41:05.728916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.085 [2024-12-10 12:41:05.729105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.085 [2024-12-10 12:41:05.729115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.085 [2024-12-10 12:41:05.729124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.085 [2024-12-10 12:41:05.729132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.085 [2024-12-10 12:41:05.741085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.085 [2024-12-10 12:41:05.741471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.085 [2024-12-10 12:41:05.741531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.085 [2024-12-10 12:41:05.741563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.085 [2024-12-10 12:41:05.742228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.085 [2024-12-10 12:41:05.742731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.085 [2024-12-10 12:41:05.742741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.085 [2024-12-10 12:41:05.742750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.085 [2024-12-10 12:41:05.742759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.085 [2024-12-10 12:41:05.754145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.085 [2024-12-10 12:41:05.754621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.085 [2024-12-10 12:41:05.754642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.085 [2024-12-10 12:41:05.754652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.085 [2024-12-10 12:41:05.754841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.085 [2024-12-10 12:41:05.755030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.085 [2024-12-10 12:41:05.755040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.085 [2024-12-10 12:41:05.755049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.085 [2024-12-10 12:41:05.755058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.085 [2024-12-10 12:41:05.767187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.085 [2024-12-10 12:41:05.767649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.085 [2024-12-10 12:41:05.767670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.085 [2024-12-10 12:41:05.767680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.085 [2024-12-10 12:41:05.767869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.085 [2024-12-10 12:41:05.768058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.085 [2024-12-10 12:41:05.768069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.085 [2024-12-10 12:41:05.768077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.085 [2024-12-10 12:41:05.768086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... 48 further iterations of the identical cycle omitted: the same "resetting controller" -> "connect() failed, errno = 111" -> "Resetting controller failed." sequence against tqpair=0x615000325a80 (10.0.0.2, port 4420) repeats roughly every 13 ms, from 12:41:05.780215 through 12:41:06.401480 ...]
00:37:59.611 [2024-12-10 12:41:06.413621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.611 [2024-12-10 12:41:06.414085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.611 [2024-12-10 12:41:06.414106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.611 [2024-12-10 12:41:06.414115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.611 [2024-12-10 12:41:06.414312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.611 [2024-12-10 12:41:06.414505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.611 [2024-12-10 12:41:06.414517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.611 [2024-12-10 12:41:06.414525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.611 [2024-12-10 12:41:06.414534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.611 [2024-12-10 12:41:06.426745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.611 [2024-12-10 12:41:06.427196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.611 [2024-12-10 12:41:06.427217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.611 [2024-12-10 12:41:06.427226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.611 [2024-12-10 12:41:06.427415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.611 [2024-12-10 12:41:06.427603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.611 [2024-12-10 12:41:06.427614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.611 [2024-12-10 12:41:06.427623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.611 [2024-12-10 12:41:06.427631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.871 [2024-12-10 12:41:06.440109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.871 [2024-12-10 12:41:06.440573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.871 [2024-12-10 12:41:06.440631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.871 [2024-12-10 12:41:06.440664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.871 [2024-12-10 12:41:06.441188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.871 [2024-12-10 12:41:06.441377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.871 [2024-12-10 12:41:06.441388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.871 [2024-12-10 12:41:06.441397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.871 [2024-12-10 12:41:06.441405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.871 [2024-12-10 12:41:06.453371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.871 [2024-12-10 12:41:06.453829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.871 [2024-12-10 12:41:06.453886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.871 [2024-12-10 12:41:06.453919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.871 [2024-12-10 12:41:06.454430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.871 [2024-12-10 12:41:06.454609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.871 [2024-12-10 12:41:06.454620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.871 [2024-12-10 12:41:06.454632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.871 [2024-12-10 12:41:06.454640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.871 [2024-12-10 12:41:06.466506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.871 [2024-12-10 12:41:06.466924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.871 [2024-12-10 12:41:06.466944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.871 [2024-12-10 12:41:06.466953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.871 [2024-12-10 12:41:06.467131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.871 [2024-12-10 12:41:06.467339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.871 [2024-12-10 12:41:06.467351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.871 [2024-12-10 12:41:06.467359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.871 [2024-12-10 12:41:06.467368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.871 [2024-12-10 12:41:06.479665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.871 [2024-12-10 12:41:06.480135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.871 [2024-12-10 12:41:06.480156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.480172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.480361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.480550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.480561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.480569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.480578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.872 [2024-12-10 12:41:06.492721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.493130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.493204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.493238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.493726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.493915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.493926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.493934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.493943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.872 [2024-12-10 12:41:06.505776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.506222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.506284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.506317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.506822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.507001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.507011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.507019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.507027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.872 [2024-12-10 12:41:06.518843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.519317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.519376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.519409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.520061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.520590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.520602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.520610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.520619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.872 [2024-12-10 12:41:06.532001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.532471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.532527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.532559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.533225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.533682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.533693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.533701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.533710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.872 [2024-12-10 12:41:06.545258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.545720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.545741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.545751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.545939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.546128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.546138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.546147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.546155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.872 [2024-12-10 12:41:06.558379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.558823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.558845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.558855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.559043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.559238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.559251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.559260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.559268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.872 [2024-12-10 12:41:06.571405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.571848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.571869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.571879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.572068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.572264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.572276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.572285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.572294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.872 [2024-12-10 12:41:06.584556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.584981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.585001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.585011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.585216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.585405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.585416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.585424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.585433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.872 [2024-12-10 12:41:06.597794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.598234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.598257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.598267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.872 [2024-12-10 12:41:06.598461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.872 [2024-12-10 12:41:06.598656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.872 [2024-12-10 12:41:06.598667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.872 [2024-12-10 12:41:06.598676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.872 [2024-12-10 12:41:06.598685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.872 [2024-12-10 12:41:06.611117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.872 [2024-12-10 12:41:06.611570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.872 [2024-12-10 12:41:06.611593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.872 [2024-12-10 12:41:06.611603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.873 [2024-12-10 12:41:06.611797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.873 [2024-12-10 12:41:06.611991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.873 [2024-12-10 12:41:06.612003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.873 [2024-12-10 12:41:06.612011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.873 [2024-12-10 12:41:06.612020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.873 [2024-12-10 12:41:06.624320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.873 [2024-12-10 12:41:06.624773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.873 [2024-12-10 12:41:06.624831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.873 [2024-12-10 12:41:06.624863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.873 [2024-12-10 12:41:06.625533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.873 [2024-12-10 12:41:06.625944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.873 [2024-12-10 12:41:06.625955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.873 [2024-12-10 12:41:06.625963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.873 [2024-12-10 12:41:06.625972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.873 [2024-12-10 12:41:06.637411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.873 [2024-12-10 12:41:06.637852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.873 [2024-12-10 12:41:06.637873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.873 [2024-12-10 12:41:06.637883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.873 [2024-12-10 12:41:06.638072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.873 [2024-12-10 12:41:06.638267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.873 [2024-12-10 12:41:06.638279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.873 [2024-12-10 12:41:06.638288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.873 [2024-12-10 12:41:06.638296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.873 [2024-12-10 12:41:06.650499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.873 [2024-12-10 12:41:06.650955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.873 [2024-12-10 12:41:06.650977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.873 [2024-12-10 12:41:06.650987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.873 [2024-12-10 12:41:06.651183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.873 [2024-12-10 12:41:06.651372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.873 [2024-12-10 12:41:06.651383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.873 [2024-12-10 12:41:06.651391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.873 [2024-12-10 12:41:06.651400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.873 [2024-12-10 12:41:06.663541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.873 [2024-12-10 12:41:06.664005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.873 [2024-12-10 12:41:06.664063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.873 [2024-12-10 12:41:06.664094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.873 [2024-12-10 12:41:06.664759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.873 [2024-12-10 12:41:06.665105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.873 [2024-12-10 12:41:06.665116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.873 [2024-12-10 12:41:06.665128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.873 [2024-12-10 12:41:06.665137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.873 [2024-12-10 12:41:06.676587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.873 [2024-12-10 12:41:06.677007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.873 [2024-12-10 12:41:06.677028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.873 [2024-12-10 12:41:06.677037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.873 [2024-12-10 12:41:06.677241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.873 [2024-12-10 12:41:06.677430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.873 [2024-12-10 12:41:06.677441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.873 [2024-12-10 12:41:06.677450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.873 [2024-12-10 12:41:06.677458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:59.873 [2024-12-10 12:41:06.689730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.873 [2024-12-10 12:41:06.690181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.873 [2024-12-10 12:41:06.690202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.873 [2024-12-10 12:41:06.690212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.873 [2024-12-10 12:41:06.690400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.873 [2024-12-10 12:41:06.690589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.873 [2024-12-10 12:41:06.690600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.873 [2024-12-10 12:41:06.690608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.873 [2024-12-10 12:41:06.690617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.133 [2024-12-10 12:41:06.703118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.133 [2024-12-10 12:41:06.703566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.133 [2024-12-10 12:41:06.703587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.133 [2024-12-10 12:41:06.703596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.133 [2024-12-10 12:41:06.703785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.133 [2024-12-10 12:41:06.703974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.133 [2024-12-10 12:41:06.703984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.133 [2024-12-10 12:41:06.703993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.133 [2024-12-10 12:41:06.704002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.133 5194.25 IOPS, 20.29 MiB/s [2024-12-10T11:41:06.959Z] [2024-12-10 12:41:06.717440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.133 [2024-12-10 12:41:06.717893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.133 [2024-12-10 12:41:06.717915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.133 [2024-12-10 12:41:06.717925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.133 [2024-12-10 12:41:06.718114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.133 [2024-12-10 12:41:06.718311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.133 [2024-12-10 12:41:06.718323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.133 [2024-12-10 12:41:06.718332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.133 [2024-12-10 12:41:06.718340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
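Aside: the "5194.25 IOPS, 20.29 MiB/s" fragment interleaved above (with its own ISO-format timestamp) is bdevperf's periodic throughput sample, not a driver message. The two numbers are mutually consistent with a 4 KiB I/O size, which this run presumably uses although the invocation itself is outside this excerpt:

    5194.25 IO/s x 4096 B/IO = 21,275,648 B/s ~= 20.29 MiB/s

I/O is still completing at a few thousand IOPS despite the reset failures, plausibly because only one path is being disrupted — every error above carries the ", 2]" controller qualifier on nqn.2016-06.io.spdk:cnode1.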
00:38:00.133 [2024-12-10 12:41:06.730564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.133 [2024-12-10 12:41:06.731011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.133 [2024-12-10 12:41:06.731032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.133 [2024-12-10 12:41:06.731042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.133 [2024-12-10 12:41:06.731238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.133 [2024-12-10 12:41:06.731427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.133 [2024-12-10 12:41:06.731438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.731446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.731455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.134 [2024-12-10 12:41:06.743590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.743958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.743986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.743996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.744191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.744381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.744392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.744400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.744408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.134 [2024-12-10 12:41:06.756721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.757194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.757261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.757293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.757824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.758013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.758024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.758033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.758041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.134 [2024-12-10 12:41:06.769845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.770327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.770385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.770417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.771067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.771384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.771395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.771404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.771413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.134 [2024-12-10 12:41:06.782892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.783334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.783356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.783365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.783545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.783725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.783735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.783744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.783752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.134 [2024-12-10 12:41:06.796008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.796415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.796437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.796447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.796640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.796829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.796840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.796849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.796857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.134 [2024-12-10 12:41:06.809134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.809579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.809601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.809611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.809800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.809988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.809999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.810008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.810016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.134 [2024-12-10 12:41:06.822236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.822686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.822707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.822716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.822906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.823095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.823105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.823114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.823122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.134 [2024-12-10 12:41:06.835401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.835840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.835861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.835871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.836060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.836256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.836271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.836279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.836288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.134 [2024-12-10 12:41:06.848555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.849012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.849032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.849042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.849244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.849438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.849449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.134 [2024-12-10 12:41:06.849458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.134 [2024-12-10 12:41:06.849466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.134 [2024-12-10 12:41:06.861878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.134 [2024-12-10 12:41:06.862324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.134 [2024-12-10 12:41:06.862347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.134 [2024-12-10 12:41:06.862357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.134 [2024-12-10 12:41:06.862887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.134 [2024-12-10 12:41:06.863525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.134 [2024-12-10 12:41:06.863537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.135 [2024-12-10 12:41:06.863546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.135 [2024-12-10 12:41:06.863554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.135 [2024-12-10 12:41:06.875157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.135 [2024-12-10 12:41:06.875602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.135 [2024-12-10 12:41:06.875660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.135 [2024-12-10 12:41:06.875692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.135 [2024-12-10 12:41:06.876156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.135 [2024-12-10 12:41:06.876352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.135 [2024-12-10 12:41:06.876364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.135 [2024-12-10 12:41:06.876376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.135 [2024-12-10 12:41:06.876385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.135 [2024-12-10 12:41:06.888421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.135 [2024-12-10 12:41:06.888900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.135 [2024-12-10 12:41:06.888957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.135 [2024-12-10 12:41:06.888990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.135 [2024-12-10 12:41:06.889446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.135 [2024-12-10 12:41:06.889635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.135 [2024-12-10 12:41:06.889646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.135 [2024-12-10 12:41:06.889655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.135 [2024-12-10 12:41:06.889664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.135 [2024-12-10 12:41:06.901542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.135 [2024-12-10 12:41:06.902014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.135 [2024-12-10 12:41:06.902073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.135 [2024-12-10 12:41:06.902106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.135 [2024-12-10 12:41:06.902609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.135 [2024-12-10 12:41:06.902798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.135 [2024-12-10 12:41:06.902809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.135 [2024-12-10 12:41:06.902817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.135 [2024-12-10 12:41:06.902826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.135 [2024-12-10 12:41:06.914623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.135 [2024-12-10 12:41:06.915066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.135 [2024-12-10 12:41:06.915088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.135 [2024-12-10 12:41:06.915098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.135 [2024-12-10 12:41:06.915294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.135 [2024-12-10 12:41:06.915488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.135 [2024-12-10 12:41:06.915499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.135 [2024-12-10 12:41:06.915508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.135 [2024-12-10 12:41:06.915516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.135 [2024-12-10 12:41:06.927785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.135 [2024-12-10 12:41:06.928246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.135 [2024-12-10 12:41:06.928305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.135 [2024-12-10 12:41:06.928337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.135 [2024-12-10 12:41:06.929008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.135 [2024-12-10 12:41:06.929516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.135 [2024-12-10 12:41:06.929527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.135 [2024-12-10 12:41:06.929535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.135 [2024-12-10 12:41:06.929544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.135 [2024-12-10 12:41:06.940845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.135 [2024-12-10 12:41:06.941276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.135 [2024-12-10 12:41:06.941333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.135 [2024-12-10 12:41:06.941365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.135 [2024-12-10 12:41:06.941831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.135 [2024-12-10 12:41:06.942009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.135 [2024-12-10 12:41:06.942020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.135 [2024-12-10 12:41:06.942028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.135 [2024-12-10 12:41:06.942036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.135 [2024-12-10 12:41:06.954151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.135 [2024-12-10 12:41:06.954594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.135 [2024-12-10 12:41:06.954648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.135 [2024-12-10 12:41:06.954680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.135 [2024-12-10 12:41:06.955345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.135 [2024-12-10 12:41:06.955655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.135 [2024-12-10 12:41:06.955666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.135 [2024-12-10 12:41:06.955675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.135 [2024-12-10 12:41:06.955684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.394 [2024-12-10 12:41:06.967433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.394 [2024-12-10 12:41:06.967868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.394 [2024-12-10 12:41:06.967889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.394 [2024-12-10 12:41:06.967902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.394 [2024-12-10 12:41:06.968090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.394 [2024-12-10 12:41:06.968288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.394 [2024-12-10 12:41:06.968299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.394 [2024-12-10 12:41:06.968307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.394 [2024-12-10 12:41:06.968316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.394 [2024-12-10 12:41:06.980590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.394 [2024-12-10 12:41:06.981037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.394 [2024-12-10 12:41:06.981058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.394 [2024-12-10 12:41:06.981068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.394 [2024-12-10 12:41:06.981264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.394 [2024-12-10 12:41:06.981453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.394 [2024-12-10 12:41:06.981464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.394 [2024-12-10 12:41:06.981472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.394 [2024-12-10 12:41:06.981481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.394 [2024-12-10 12:41:06.993669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.394 [2024-12-10 12:41:06.994130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.394 [2024-12-10 12:41:06.994200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.394 [2024-12-10 12:41:06.994233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.394 [2024-12-10 12:41:06.994716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.394 [2024-12-10 12:41:06.994904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.394 [2024-12-10 12:41:06.994915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.394 [2024-12-10 12:41:06.994923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.394 [2024-12-10 12:41:06.994932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.394 [2024-12-10 12:41:07.006922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.394 [2024-12-10 12:41:07.007358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.394 [2024-12-10 12:41:07.007379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.007389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.007584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.007762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.007772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.007781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.007789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.019991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.020424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.020446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.020455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.020644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.020833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.020844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.020852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.020861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.033064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.033491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.033513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.033523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.033711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.033899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.033910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.033918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.033927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.046211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.046661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.046682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.046691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.046870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.047049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.047062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.047071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.047078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.059334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.059794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.059852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.059884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.060549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.060957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.060968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.060977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.060986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.072497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.072951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.072972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.072982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.073178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.073367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.073378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.073387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.073415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.085532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.085892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.085913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.085923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.086111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.086306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.086318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.086326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.086338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.098700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.099154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.099181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.099191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.099386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.099580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.099591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.099600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.099609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.112018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.112403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.112450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.112484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.113040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.113251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.113269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.113278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.113287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.125207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.125634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.125655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.125665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.125853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.126042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.126053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.126061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.126070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.138294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.138757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.138814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.138846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.139511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.140064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.140076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.140084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.140093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.151388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.151880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.151901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.151911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.152099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.152293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.152305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.152313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.152322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.164539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.164993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.165015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.165025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.165220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.165410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.165421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.165429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.165438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.177924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.178339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.178361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.178375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.178564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.178752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.178763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.178772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.178780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.191179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.191636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.395 [2024-12-10 12:41:07.191657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.395 [2024-12-10 12:41:07.191667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.395 [2024-12-10 12:41:07.191857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.395 [2024-12-10 12:41:07.192045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.395 [2024-12-10 12:41:07.192056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.395 [2024-12-10 12:41:07.192065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.395 [2024-12-10 12:41:07.192073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.395 [2024-12-10 12:41:07.204252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.395 [2024-12-10 12:41:07.204691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.396 [2024-12-10 12:41:07.204711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.396 [2024-12-10 12:41:07.204721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.396 [2024-12-10 12:41:07.204910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.396 [2024-12-10 12:41:07.205098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.396 [2024-12-10 12:41:07.205109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.396 [2024-12-10 12:41:07.205118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.396 [2024-12-10 12:41:07.205126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.396 [2024-12-10 12:41:07.217508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.396 [2024-12-10 12:41:07.217971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.396 [2024-12-10 12:41:07.217993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.396 [2024-12-10 12:41:07.218003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.396 [2024-12-10 12:41:07.218206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.396 [2024-12-10 12:41:07.218404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.396 [2024-12-10 12:41:07.218415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.396 [2024-12-10 12:41:07.218424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.396 [2024-12-10 12:41:07.218433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.655 [2024-12-10 12:41:07.230803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.655 [2024-12-10 12:41:07.231240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.655 [2024-12-10 12:41:07.231262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.655 [2024-12-10 12:41:07.231272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.655 [2024-12-10 12:41:07.231465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.655 [2024-12-10 12:41:07.231643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.655 [2024-12-10 12:41:07.231653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.655 [2024-12-10 12:41:07.231662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.655 [2024-12-10 12:41:07.231670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.655 [2024-12-10 12:41:07.243821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.655 [2024-12-10 12:41:07.244224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.655 [2024-12-10 12:41:07.244247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.655 [2024-12-10 12:41:07.244257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.655 [2024-12-10 12:41:07.244446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.655 [2024-12-10 12:41:07.244635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.655 [2024-12-10 12:41:07.244646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.655 [2024-12-10 12:41:07.244655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.655 [2024-12-10 12:41:07.244663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.655 [2024-12-10 12:41:07.256889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.655 [2024-12-10 12:41:07.257287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.655 [2024-12-10 12:41:07.257309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.655 [2024-12-10 12:41:07.257319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.655 [2024-12-10 12:41:07.257508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.655 [2024-12-10 12:41:07.257697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.655 [2024-12-10 12:41:07.257709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.655 [2024-12-10 12:41:07.257724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.655 [2024-12-10 12:41:07.257732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.655 [2024-12-10 12:41:07.270014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.655 [2024-12-10 12:41:07.270495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.655 [2024-12-10 12:41:07.270518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.655 [2024-12-10 12:41:07.270528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.655 [2024-12-10 12:41:07.270718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.655 [2024-12-10 12:41:07.270906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.655 [2024-12-10 12:41:07.270918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.655 [2024-12-10 12:41:07.270926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.655 [2024-12-10 12:41:07.270935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.655 [2024-12-10 12:41:07.283212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.655 [2024-12-10 12:41:07.283692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.655 [2024-12-10 12:41:07.283715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.655 [2024-12-10 12:41:07.283726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.655 [2024-12-10 12:41:07.283916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.655 [2024-12-10 12:41:07.284106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.655 [2024-12-10 12:41:07.284117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.655 [2024-12-10 12:41:07.284127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.655 [2024-12-10 12:41:07.284135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.655 [2024-12-10 12:41:07.296261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.655 [2024-12-10 12:41:07.296711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.655 [2024-12-10 12:41:07.296732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.655 [2024-12-10 12:41:07.296742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.655 [2024-12-10 12:41:07.296931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.655 [2024-12-10 12:41:07.297119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.655 [2024-12-10 12:41:07.297131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.655 [2024-12-10 12:41:07.297140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.655 [2024-12-10 12:41:07.297155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.655 [2024-12-10 12:41:07.309391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.655 [2024-12-10 12:41:07.309762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.655 [2024-12-10 12:41:07.309784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.655 [2024-12-10 12:41:07.309795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.655 [2024-12-10 12:41:07.309986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.655 [2024-12-10 12:41:07.310181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.655 [2024-12-10 12:41:07.310194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.310202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.310211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.322498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.322951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.322972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.322982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.323176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.323366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.323378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.323387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.323395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.335664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.336127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.336199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.336233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.336736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.336924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.336935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.336943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.336952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.348742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.349207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.349231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.349241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.349444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.349632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.349643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.349652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.349660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.362012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.362496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.362517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.362528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.362722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.362915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.362927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.362936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.362945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.375202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.375653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.375707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.375740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.376355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.376544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.376555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.376563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.376572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.388272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.388737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.388796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.388828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.389357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.389547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.389558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.389567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.389576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.401392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.401844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.401865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.401874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.402063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.402257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.402270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.402279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.402287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.414561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.414941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.414963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.414973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.415162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.415356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.415367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.656 [2024-12-10 12:41:07.415376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.656 [2024-12-10 12:41:07.415384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.656 [2024-12-10 12:41:07.427602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.656 [2024-12-10 12:41:07.427988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.656 [2024-12-10 12:41:07.428010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.656 [2024-12-10 12:41:07.428020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.656 [2024-12-10 12:41:07.428215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.656 [2024-12-10 12:41:07.428407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.656 [2024-12-10 12:41:07.428418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.657 [2024-12-10 12:41:07.428427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.657 [2024-12-10 12:41:07.428435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.657 [2024-12-10 12:41:07.440814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.657 [2024-12-10 12:41:07.441198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.657 [2024-12-10 12:41:07.441220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.657 [2024-12-10 12:41:07.441230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.657 [2024-12-10 12:41:07.441424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.657 [2024-12-10 12:41:07.441602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.657 [2024-12-10 12:41:07.441613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.657 [2024-12-10 12:41:07.441621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.657 [2024-12-10 12:41:07.441629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.657 [2024-12-10 12:41:07.453970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.657 [2024-12-10 12:41:07.454332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.657 [2024-12-10 12:41:07.454352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.657 [2024-12-10 12:41:07.454362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.657 [2024-12-10 12:41:07.454551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.657 [2024-12-10 12:41:07.454739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.657 [2024-12-10 12:41:07.454750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.657 [2024-12-10 12:41:07.454759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.657 [2024-12-10 12:41:07.454767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.657 [2024-12-10 12:41:07.467061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.657 [2024-12-10 12:41:07.467390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.657 [2024-12-10 12:41:07.467412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.657 [2024-12-10 12:41:07.467421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.657 [2024-12-10 12:41:07.467610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.657 [2024-12-10 12:41:07.467799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.657 [2024-12-10 12:41:07.467810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.657 [2024-12-10 12:41:07.467822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.657 [2024-12-10 12:41:07.467830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.917 [2024-12-10 12:41:07.480434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.480873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.480894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.480904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.481097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.481303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.481315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.481324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.481332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.918 [2024-12-10 12:41:07.493541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.493957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.493981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.493991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.494186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.494376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.494387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.494395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.494403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.918 [2024-12-10 12:41:07.506633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.507086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.507144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.507190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.507668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.507856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.507867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.507875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.507884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.918 [2024-12-10 12:41:07.519941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.520309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.520330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.520340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.520534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.520727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.520739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.520748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.520757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.918 [2024-12-10 12:41:07.533342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.533743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.533764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.533775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.533980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.534193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.534205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.534215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.534223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.918 [2024-12-10 12:41:07.547069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.547488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.547512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.547523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.547742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.547960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.547973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.547983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.547992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.918 [2024-12-10 12:41:07.560797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.561275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.561303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.561314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.561534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.561753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.561766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.561776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.561786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.918 [2024-12-10 12:41:07.574572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.575068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.575091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.575103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.575329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.575550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.575562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.575573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.575582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.918 [2024-12-10 12:41:07.588201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.588603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.588626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.588637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.588843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.589049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.589060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.589070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.918 [2024-12-10 12:41:07.589079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.918 [2024-12-10 12:41:07.601850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.918 [2024-12-10 12:41:07.602345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.918 [2024-12-10 12:41:07.602370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.918 [2024-12-10 12:41:07.602383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.918 [2024-12-10 12:41:07.602606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.918 [2024-12-10 12:41:07.602826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.918 [2024-12-10 12:41:07.602839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.918 [2024-12-10 12:41:07.602849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.602859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.919 [2024-12-10 12:41:07.615509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.615917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.615939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.615950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.616156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.616369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.616381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.616391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.616400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.919 [2024-12-10 12:41:07.628787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.629152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.629178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.629189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.629383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.629577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.629588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.629597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.629606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.919 [2024-12-10 12:41:07.642072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.642417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.642439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.642449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.642643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.642836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.642851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.642860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.642869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.919 [2024-12-10 12:41:07.655251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.655569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.655590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.655600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.655789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.655977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.655988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.655996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.656005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.919 [2024-12-10 12:41:07.668316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.668802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.668859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.668891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.669430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.669620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.669631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.669640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.669648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.919 [2024-12-10 12:41:07.681447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.681842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.681863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.681879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.682068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.682263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.682275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.682284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.682295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.919 [2024-12-10 12:41:07.694502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.694972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.694993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.695004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.695198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.695389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.695400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.695409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.695417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.919 [2024-12-10 12:41:07.707642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.708022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.708044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.708054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.708248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.708437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.708448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.708457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.708465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:00.919 4155.40 IOPS, 16.23 MiB/s [2024-12-10T11:41:07.745Z] [2024-12-10 12:41:07.720682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.919 [2024-12-10 12:41:07.721108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.919 [2024-12-10 12:41:07.721129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.919 [2024-12-10 12:41:07.721140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.919 [2024-12-10 12:41:07.721334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.919 [2024-12-10 12:41:07.721523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.919 [2024-12-10 12:41:07.721534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.919 [2024-12-10 12:41:07.721543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.919 [2024-12-10 12:41:07.721551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
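The loop above is the host reconnect path spinning against a dead target: connect() to 10.0.0.2:4420 fails immediately with errno 111 (ECONNREFUSED on Linux), so each spdk_nvme_ctrlr_reconnect_poll_async() attempt ends with the controller back in the failed state. A minimal stand-alone sketch of that failure mode follows; it is not SPDK code, the address and port are simply copied from the log, and it only produces ECONNREFUSED on a host where 10.0.0.2 is reachable but nothing is listening on the port (elsewhere the connect would time out instead):

/* repro_econnrefused.c - hypothetical repro of the connect() failure
 * logged by posix_sock_create above; plain POSIX sockets, no SPDK. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the interface up but no listener, errno is 111 (ECONNREFUSED),
         * matching "connect() failed, errno = 111" in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}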
[sequence repeats 6 more times, from 12:41:07.733 through 12:41:07.801]
[sequence repeats at 12:41:07.813]
00:38:01.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3895961 Killed "${NVMF_APP[@]}" "$@"
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[sequence repeats at 12:41:07.826]
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3897557
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3897557
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3897557 ']'
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:01.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:01.181 12:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[sequence repeats at 12:41:07.839]
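The trace above restarts the target: nvmfappstart launches a fresh nvmf_tgt (pid 3897557) inside the cvl_0_0_ns_spdk namespace, then waitforlisten polls, up to max_retries=100, until the new process answers on /var/tmp/spdk.sock. The real helper is a bash function in autotest_common.sh; the C sketch below is only a rough model of that wait loop, assuming it simply retries a UNIX-domain connect() until it succeeds:

/* waitforlisten_sketch.c - conceptual model of the autotest helper,
 * not the helper itself: poll until /var/tmp/spdk.sock accepts. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd >= 0 &&
            connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        if (fd >= 0) close(fd);
        usleep(100 * 1000);      /* 100 ms between attempts (assumed interval) */
    }
    return -1;                   /* target never came up */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("RPC socket is listening");
    else
        puts("timed out waiting for /var/tmp/spdk.sock");
    return 0;
}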
[sequence repeats 4 more times, from 12:41:07.853 through 12:41:07.893]
[sequence repeats at 12:41:07.906]
00:38:01.182 [2024-12-10 12:41:07.909029] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:38:01.182 [2024-12-10 12:41:07.909119] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[sequence repeats at 12:41:07.920]
[sequence repeats 8 more times, from 12:41:07.933 through 12:41:08.027]
00:38:01.443 [2024-12-10 12:41:08.031376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:01.443 [2024-12-10 12:41:08.040298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.443 [2024-12-10 12:41:08.040757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.443 [2024-12-10 12:41:08.040778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.443 [2024-12-10 12:41:08.040789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.443 [2024-12-10 12:41:08.040982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.443 [2024-12-10 12:41:08.041179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.443 [2024-12-10 12:41:08.041190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.443 [2024-12-10 12:41:08.041200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.443 [2024-12-10 12:41:08.041209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.443 [2024-12-10 12:41:08.053522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.443 [2024-12-10 12:41:08.053918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.443 [2024-12-10 12:41:08.053940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.443 [2024-12-10 12:41:08.053950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.443 [2024-12-10 12:41:08.054144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.443 [2024-12-10 12:41:08.054363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.443 [2024-12-10 12:41:08.054375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.443 [2024-12-10 12:41:08.054393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.443 [2024-12-10 12:41:08.054402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.443 [2024-12-10 12:41:08.066609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.443 [2024-12-10 12:41:08.067075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.443 [2024-12-10 12:41:08.067097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.443 [2024-12-10 12:41:08.067108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.443 [2024-12-10 12:41:08.067309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.443 [2024-12-10 12:41:08.067500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.443 [2024-12-10 12:41:08.067511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.443 [2024-12-10 12:41:08.067520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.443 [2024-12-10 12:41:08.067529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.443 [2024-12-10 12:41:08.079964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.443 [2024-12-10 12:41:08.080431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.443 [2024-12-10 12:41:08.080452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.443 [2024-12-10 12:41:08.080462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.443 [2024-12-10 12:41:08.080655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.443 [2024-12-10 12:41:08.080847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.443 [2024-12-10 12:41:08.080857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.443 [2024-12-10 12:41:08.080866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.443 [2024-12-10 12:41:08.080875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.443 [2024-12-10 12:41:08.093335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.443 [2024-12-10 12:41:08.093805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.443 [2024-12-10 12:41:08.093827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.443 [2024-12-10 12:41:08.093840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.094033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.094233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.094244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.094254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.094263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.444 [2024-12-10 12:41:08.106693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.107041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.107062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.107073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.107278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.107495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.107506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.107516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.107525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.444 [2024-12-10 12:41:08.120156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.120621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.120643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.120653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.120851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.121048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.121059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.121069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.121077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.444 [2024-12-10 12:41:08.133447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.133880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.133901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.133911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.134105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.134303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.134315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.134324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.134333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.444 [2024-12-10 12:41:08.144658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:01.444 [2024-12-10 12:41:08.144687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:01.444 [2024-12-10 12:41:08.144697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:01.444 [2024-12-10 12:41:08.144708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:01.444 [2024-12-10 12:41:08.144715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
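The five app_setup_trace notices are the one non-error event in this stretch: the application was started with tracepoint group mask 0xFFFF, so it is writing trace events to a shared-memory file that can be inspected live with the quoted `spdk_trace -s nvmf -i 0` command, or preserved by copying /dev/shm/nvmf_trace.0 off the node before workspace cleanup — for a nightly CI run like this one, copying the file out is usually the only option once the job has finished.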
00:38:01.444 [2024-12-10 12:41:08.146701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.146926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:01.444 [2024-12-10 12:41:08.147006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.444 [2024-12-10 12:41:08.147011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:01.444 [2024-12-10 12:41:08.147171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.147192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.147203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.147400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.147598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.147609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.147618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.147628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.444 [2024-12-10 12:41:08.160150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.160564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.160589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.160601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.160802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.161003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.161014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.161024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.161034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
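Interleaved with the reset loop, the three reactor_run notices show the SPDK event framework of a second application coming up: "Total cores available: 3" (logged just above) and one reactor thread pinned to each of cores 1, 2, and 3. A hedged sketch of the bootstrap that produces these notices, assuming a recent SPDK with the spdk/event.h API (link against the SPDK event libraries); the core mask value 0xE here is illustrative, the test scripts pass their own:

```c
/* Sketch: spdk_app_start() spawns one reactor per core in reactor_mask,
 * each of which logs "Reactor started on core N". */
#include "spdk/stdinc.h"
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
	(void)ctx;
	/* Application logic would be scheduled here; stop immediately for the demo. */
	spdk_app_stop(0);
}

int
main(void)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "reactor_demo";
	opts.reactor_mask = "0xE";	/* cores 1, 2, 3 -- three reactors, as in the log */

	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}
```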
00:38:01.444 [2024-12-10 12:41:08.173529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.174008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.174030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.174041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.174244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.174443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.174455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.174464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.174474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.444 [2024-12-10 12:41:08.186950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.187399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.187421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.187432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.187630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.187827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.187839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.187848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.187858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.444 [2024-12-10 12:41:08.200269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.200738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.200760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.200770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.200968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.201172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.201185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.444 [2024-12-10 12:41:08.201195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.444 [2024-12-10 12:41:08.201205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.444 [2024-12-10 12:41:08.213679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.444 [2024-12-10 12:41:08.214147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.444 [2024-12-10 12:41:08.214179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.444 [2024-12-10 12:41:08.214190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.444 [2024-12-10 12:41:08.214388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.444 [2024-12-10 12:41:08.214584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.444 [2024-12-10 12:41:08.214596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.445 [2024-12-10 12:41:08.214605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.445 [2024-12-10 12:41:08.214614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.445 [2024-12-10 12:41:08.227082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.445 [2024-12-10 12:41:08.227579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.445 [2024-12-10 12:41:08.227605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.445 [2024-12-10 12:41:08.227617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.445 [2024-12-10 12:41:08.227817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.445 [2024-12-10 12:41:08.228016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.445 [2024-12-10 12:41:08.228027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.445 [2024-12-10 12:41:08.228037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.445 [2024-12-10 12:41:08.228048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.445 [2024-12-10 12:41:08.240571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.445 [2024-12-10 12:41:08.241039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.445 [2024-12-10 12:41:08.241065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.445 [2024-12-10 12:41:08.241077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.445 [2024-12-10 12:41:08.241285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.445 [2024-12-10 12:41:08.241487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.445 [2024-12-10 12:41:08.241498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.445 [2024-12-10 12:41:08.241507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.445 [2024-12-10 12:41:08.241518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.445 [2024-12-10 12:41:08.254018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.445 [2024-12-10 12:41:08.254493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.445 [2024-12-10 12:41:08.254515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.445 [2024-12-10 12:41:08.254526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.445 [2024-12-10 12:41:08.254729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.445 [2024-12-10 12:41:08.254928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.445 [2024-12-10 12:41:08.254940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.445 [2024-12-10 12:41:08.254949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.445 [2024-12-10 12:41:08.254959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.445 [2024-12-10 12:41:08.267428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.705 [2024-12-10 12:41:08.267896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.705 [2024-12-10 12:41:08.267918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.705 [2024-12-10 12:41:08.267928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.705 [2024-12-10 12:41:08.268126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.705 [2024-12-10 12:41:08.268332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.705 [2024-12-10 12:41:08.268344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.705 [2024-12-10 12:41:08.268354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.705 [2024-12-10 12:41:08.268363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.705 [2024-12-10 12:41:08.280818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.705 [2024-12-10 12:41:08.281292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.705 [2024-12-10 12:41:08.281315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.705 [2024-12-10 12:41:08.281326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.705 [2024-12-10 12:41:08.281524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.705 [2024-12-10 12:41:08.281721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.705 [2024-12-10 12:41:08.281732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.705 [2024-12-10 12:41:08.281741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.705 [2024-12-10 12:41:08.281750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.705 [2024-12-10 12:41:08.294193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.705 [2024-12-10 12:41:08.294671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.705 [2024-12-10 12:41:08.294694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.705 [2024-12-10 12:41:08.294705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.705 [2024-12-10 12:41:08.294902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.705 [2024-12-10 12:41:08.295099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.705 [2024-12-10 12:41:08.295115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.705 [2024-12-10 12:41:08.295124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.705 [2024-12-10 12:41:08.295134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.705 [2024-12-10 12:41:08.307557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.705 [2024-12-10 12:41:08.308031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.705 [2024-12-10 12:41:08.308053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.705 [2024-12-10 12:41:08.308064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.705 [2024-12-10 12:41:08.308297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.705 [2024-12-10 12:41:08.308495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.705 [2024-12-10 12:41:08.308507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.705 [2024-12-10 12:41:08.308516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.705 [2024-12-10 12:41:08.308525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.705 [2024-12-10 12:41:08.320953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.705 [2024-12-10 12:41:08.321410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.705 [2024-12-10 12:41:08.321433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.705 [2024-12-10 12:41:08.321444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.705 [2024-12-10 12:41:08.321641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.705 [2024-12-10 12:41:08.321838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.705 [2024-12-10 12:41:08.321850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.705 [2024-12-10 12:41:08.321860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.705 [2024-12-10 12:41:08.321869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.705 [2024-12-10 12:41:08.334314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.705 [2024-12-10 12:41:08.334753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.705 [2024-12-10 12:41:08.334775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.705 [2024-12-10 12:41:08.334786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.705 [2024-12-10 12:41:08.334988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.705 [2024-12-10 12:41:08.335192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.705 [2024-12-10 12:41:08.335205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.705 [2024-12-10 12:41:08.335217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.705 [2024-12-10 12:41:08.335227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.705 [2024-12-10 12:41:08.347632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.705 [2024-12-10 12:41:08.348099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.705 [2024-12-10 12:41:08.348122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.705 [2024-12-10 12:41:08.348134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.705 [2024-12-10 12:41:08.348337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.705 [2024-12-10 12:41:08.348535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.705 [2024-12-10 12:41:08.348546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.705 [2024-12-10 12:41:08.348556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.705 [2024-12-10 12:41:08.348566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.706 [2024-12-10 12:41:08.360969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.361430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.361453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.361464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.361661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.361857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.361868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.361878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.361887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.706 [2024-12-10 12:41:08.374335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.374750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.374775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.374788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.374988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.375195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.375208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.375218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.375228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.706 [2024-12-10 12:41:08.387740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.388138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.388163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.388182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.388383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.388582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.388594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.388604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.388613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.706 [2024-12-10 12:41:08.401073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.401403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.401425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.401436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.401634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.401831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.401842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.401851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.401860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.706 [2024-12-10 12:41:08.414477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.414963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.414986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.414996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.415200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.415398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.415410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.415419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.415429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.706 [2024-12-10 12:41:08.427857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.428325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.428348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.428362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.428560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.428756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.428768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.428777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.428786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.706 [2024-12-10 12:41:08.441204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.441670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.441721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.441732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.441930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.442125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.442136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.442146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.442155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.706 [2024-12-10 12:41:08.454562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.455025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.455048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.455058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.455291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.455487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.455499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.455508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.455517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.706 [2024-12-10 12:41:08.467910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.468298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.468321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.468332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.468532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.468730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.468741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.468750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.468760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.706 [2024-12-10 12:41:08.481354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.706 [2024-12-10 12:41:08.481820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.706 [2024-12-10 12:41:08.481842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.706 [2024-12-10 12:41:08.481853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.706 [2024-12-10 12:41:08.482048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.706 [2024-12-10 12:41:08.482250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.706 [2024-12-10 12:41:08.482263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.706 [2024-12-10 12:41:08.482273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.706 [2024-12-10 12:41:08.482281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.706 [2024-12-10 12:41:08.494690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.707 [2024-12-10 12:41:08.495186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.707 [2024-12-10 12:41:08.495209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.707 [2024-12-10 12:41:08.495221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.707 [2024-12-10 12:41:08.495418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.707 [2024-12-10 12:41:08.495614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.707 [2024-12-10 12:41:08.495626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.707 [2024-12-10 12:41:08.495635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.707 [2024-12-10 12:41:08.495644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.707 [2024-12-10 12:41:08.508071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.707 [2024-12-10 12:41:08.508542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.707 [2024-12-10 12:41:08.508565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.707 [2024-12-10 12:41:08.508575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.707 [2024-12-10 12:41:08.508770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.707 [2024-12-10 12:41:08.508980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.707 [2024-12-10 12:41:08.508995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.707 [2024-12-10 12:41:08.509005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.707 [2024-12-10 12:41:08.509013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.707 [2024-12-10 12:41:08.521446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.707 [2024-12-10 12:41:08.521925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.707 [2024-12-10 12:41:08.521948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.707 [2024-12-10 12:41:08.521959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.707 [2024-12-10 12:41:08.522156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.707 [2024-12-10 12:41:08.522358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.707 [2024-12-10 12:41:08.522370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.707 [2024-12-10 12:41:08.522380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.707 [2024-12-10 12:41:08.522389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.967 [2024-12-10 12:41:08.534827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.967 [2024-12-10 12:41:08.535244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.967 [2024-12-10 12:41:08.535267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.967 [2024-12-10 12:41:08.535277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.967 [2024-12-10 12:41:08.535476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.967 [2024-12-10 12:41:08.535672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.967 [2024-12-10 12:41:08.535684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.967 [2024-12-10 12:41:08.535693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.967 [2024-12-10 12:41:08.535703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.967 [... twelve further reconnect attempts (12:41:08.548, .561, .575, .588, .601, .615, .628, .641, .655, .668, .681, .694, one roughly every 13 ms) fail identically: connect() to 10.0.0.2:4420 refused (errno = 111), controller reinitialization failed, Resetting controller failed ...]
00:38:01.969 [... 12:41:08.708338-.709224: reconnect attempt refused (connect() errno = 111), Resetting controller failed ...]
00:38:01.969 3462.83 IOPS, 13.53 MiB/s [2024-12-10T11:41:08.795Z]
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:01.969 [... 12:41:08.722994-.723901: reconnect attempt refused (connect() errno = 111), Resetting controller failed ...]
00:38:01.969 12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:01.969 [... 12:41:08.736324-.737149 and 12:41:08.749743-.750630: two more reconnect attempts fail identically ...]
00:38:01.969 12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:01.969 [... 12:41:08.763036-.763963: reconnect attempt refused (connect() errno = 111), Resetting controller failed ...]
00:38:01.969 [2024-12-10 12:41:08.764330] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:01.969 [... 12:41:08.776374-.777312 and 12:41:08.789711-.790630: two more reconnect attempts fail identically ...]
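The IOPS samples interleaved above come from bdevperf, which is already running its verify job against Nvme1n1 while the target is being brought back up underneath it. Matching the job summary reported later in the log (depth 128, 4 KiB I/Os, verify workload, ~15 s runtime), a comparable standalone invocation would look like the following sketch; the binary path assumes a standard SPDK build, and the JSON config that attaches Nvme1 over TCP is an assumed input, not shown here:

    # Hypothetical standalone run mirroring the logged job parameters.
    ./build/examples/bdevperf --json ./bdevperf.json -q 128 -o 4096 -w verify -t 15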
00:38:02.229 [... four further reconnect attempts (12:41:08.803080-.803973, .816524-.817470, .829938-.830853, .843329-.844196) fail identically while the target is still coming up: connect() refused (errno = 111), Resetting controller failed ...]
00:38:02.229 [... 12:41:08.856644-.857542: reconnect attempt refused (connect() errno = 111), Resetting controller failed ...]
00:38:02.229 Malloc0
00:38:02.229 12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:02.229 [... 12:41:08.869984-.870812: reconnect attempt refused (connect() errno = 111), Resetting controller failed ...]
00:38:02.229 12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:02.230 [2024-12-10 12:41:08.883421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.230 [2024-12-10 12:41:08.883423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3896520
00:38:02.230 [2024-12-10 12:41:09.001483] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:38:04.101 3947.71 IOPS, 15.42 MiB/s [2024-12-10T11:41:11.875Z]
4683.12 IOPS, 18.29 MiB/s [2024-12-10T11:41:12.811Z]
5262.33 IOPS, 20.56 MiB/s [2024-12-10T11:41:13.748Z]
5717.00 IOPS, 22.33 MiB/s [2024-12-10T11:41:15.126Z]
6087.00 IOPS, 23.78 MiB/s [2024-12-10T11:41:16.060Z]
6390.83 IOPS, 24.96 MiB/s [2024-12-10T11:41:16.996Z]
6652.31 IOPS, 25.99 MiB/s [2024-12-10T11:41:17.932Z]
6868.00 IOPS, 26.83 MiB/s [2024-12-10T11:41:17.932Z]
7055.73 IOPS, 27.56 MiB/s
00:38:11.106 Latency(us)
00:38:11.106 Device Information      : runtime(s)  IOPS      MiB/s   Fail/s     TO/s   Average   min      max
00:38:11.106 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:11.106 Verification LBA range: start 0x0 length 0x4000
00:38:11.106   Nvme1n1               : 15.01       7058.96   27.57   12270.10   0.00   6600.67   729.48   29459.99
00:38:11.106 ===================================================================================================================
00:38:11.106   Total                 :             7058.96   27.57   12270.10   0.00   6600.67   729.48   29459.99
00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
12:41:18
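The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py, and it is exactly this bring-up that finally lets the host's pending reset succeed and bdevperf ramp to ~7k IOPS. Collected in one place, the equivalent manual target bring-up against a running nvmf_tgt is the following sketch (same arguments as logged; a default rpc.py socket is assumed):

    # 1. TCP transport (bdevperf.sh@17)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 2. 64 MiB RAM-backed bdev with 512 B blocks (bdevperf.sh@18)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # 3. Subsystem allowing any host NQN, with a fixed serial (bdevperf.sh@19)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # 4. Expose the bdev as a namespace of the subsystem (bdevperf.sh@20)
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # 5. Open the listener the host has been retrying against (bdevperf.sh@21)
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420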
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:12.044 rmmod nvme_tcp 00:38:12.044 rmmod nvme_fabrics 00:38:12.044 rmmod nvme_keyring 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3897557 ']' 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3897557 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3897557 ']' 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3897557 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3897557 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3897557' 00:38:12.044 killing process with pid 3897557 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3897557 00:38:12.044 12:41:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3897557 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:13.422 12:41:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.979 12:41:22 
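The teardown traced above is nvmftestfini: it unloads the nvme-tcp/nvme-fabrics/nvme-keyring kernel modules, kills the target process (pid 3897557, reactor_1), restores iptables, and removes the test network namespace. The killprocess helper it calls (autotest_common.sh@954-978) boils down to the following simplified reconstruction; the real helper also special-cases sudo-wrapped processes, which is only noted here:

    # Simplified reconstruction of the traced killprocess helper, not the verbatim script.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                      # the '[' -z 3897557 ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 1         # is the pid still alive?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
            [[ $process_name == sudo ]] && return 1    # real helper handles sudo children differently
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it and propagate the exit status
    }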
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:15.979 00:38:15.979 real 0m29.563s 00:38:15.979 user 1m14.093s 00:38:15.979 sys 0m6.571s 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:15.979 ************************************ 00:38:15.979 END TEST nvmf_bdevperf 00:38:15.979 ************************************ 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.979 ************************************ 00:38:15.979 START TEST nvmf_target_disconnect 00:38:15.979 ************************************ 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:15.979 * Looking for test storage... 00:38:15.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.979 --rc genhtml_branch_coverage=1 00:38:15.979 --rc genhtml_function_coverage=1 00:38:15.979 --rc genhtml_legend=1 00:38:15.979 --rc geninfo_all_blocks=1 00:38:15.979 --rc geninfo_unexecuted_blocks=1 00:38:15.979 00:38:15.979 ' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.979 --rc genhtml_branch_coverage=1 00:38:15.979 --rc genhtml_function_coverage=1 00:38:15.979 --rc genhtml_legend=1 00:38:15.979 --rc geninfo_all_blocks=1 00:38:15.979 --rc geninfo_unexecuted_blocks=1 00:38:15.979 00:38:15.979 ' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.979 --rc genhtml_branch_coverage=1 00:38:15.979 --rc genhtml_function_coverage=1 00:38:15.979 --rc genhtml_legend=1 00:38:15.979 --rc geninfo_all_blocks=1 00:38:15.979 --rc geninfo_unexecuted_blocks=1 00:38:15.979 00:38:15.979 ' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.979 --rc genhtml_branch_coverage=1 00:38:15.979 --rc genhtml_function_coverage=1 00:38:15.979 --rc genhtml_legend=1 00:38:15.979 --rc geninfo_all_blocks=1 00:38:15.979 --rc geninfo_unexecuted_blocks=1 00:38:15.979 00:38:15.979 ' 00:38:15.979 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.980 12:41:22 
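The `lt 1.15 2` check traced above asks whether the installed lcov predates version 2, and comes from scripts/common.sh: both version strings are split on '.', '-' and ':' (the `IFS=.-:` in the trace) and compared field by field. A simplified reconstruction of that logic, under an assumed helper name, not the verbatim script:

    # Sketch of the traced cmp_versions logic; helper names here are hypothetical.
    lt() { cmp_versions_sketch "$1" '<' "$2"; }
    cmp_versions_sketch() {
        local IFS=.-:                                  # split fields exactly as the traced IFS does
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]                             # versions compared equal
    }
    # Usage matching the trace: returns success, since 1.15 < 2 field by field.
    lt 1.15 2 && echo "lcov older than 2"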
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:15.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:15.980 12:41:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:21.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:21.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:21.348 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:21.349 Found net devices under 0000:af:00.0: cvl_0_0 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:21.349 Found net devices under 0000:af:00.1: cvl_0_1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
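The device discovery above (gather_supported_nvmf_pci_devs in nvmf/common.sh) works by matching PCI vendor/device IDs against per-family NIC lists (e810, x722, mlx) and then walking sysfs for the net interfaces bound to each match. A minimal standalone sketch of the same pattern, assuming the Intel E810 device ID 0x159b seen in this run; the paths are the standard Linux sysfs layout, nothing SPDK-specific:

for pci in /sys/bus/pci/devices/*; do
    # the vendor/device files hold hex IDs such as 0x8086 / 0x159b
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    # each bound NIC exposes its net interface(s) under <pci>/net/
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done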
00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:21.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:21.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:38:21.349 00:38:21.349 --- 10.0.0.2 ping statistics --- 00:38:21.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:21.349 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:21.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:21.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:38:21.349 00:38:21.349 --- 10.0.0.1 ping statistics --- 00:38:21.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:21.349 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:21.349 ************************************ 00:38:21.349 START TEST nvmf_target_disconnect_tc1 00:38:21.349 ************************************ 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:21.349 12:41:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:21.349 [2024-12-10 12:41:27.839844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:21.349 [2024-12-10 12:41:27.839911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325800 with addr=10.0.0.2, port=4420 00:38:21.349 [2024-12-10 12:41:27.840004] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:21.349 [2024-12-10 12:41:27.840017] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:21.349 [2024-12-10 12:41:27.840028] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:21.349 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:21.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:21.349 Initializing NVMe Controllers 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:21.349 00:38:21.349 real 0m0.187s 00:38:21.349 user 0m0.076s 00:38:21.349 sys 0m0.111s 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:21.349 ************************************ 00:38:21.349 END TEST nvmf_target_disconnect_tc1 00:38:21.349 ************************************ 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:21.349 ************************************ 00:38:21.349 START TEST nvmf_target_disconnect_tc2 00:38:21.349 ************************************ 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:21.349 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3902858 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3902858 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3902858 ']' 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:21.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:21.350 12:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.350 [2024-12-10 12:41:28.006842] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:38:21.350 [2024-12-10 12:41:28.006928] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:21.350 [2024-12-10 12:41:28.139567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:21.608 [2024-12-10 12:41:28.249300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:21.608 [2024-12-10 12:41:28.249343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:21.608 [2024-12-10 12:41:28.249353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:21.608 [2024-12-10 12:41:28.249382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:21.608 [2024-12-10 12:41:28.249390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:21.608 [2024-12-10 12:41:28.251751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:21.608 [2024-12-10 12:41:28.251839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:21.608 [2024-12-10 12:41:28.251925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:21.608 [2024-12-10 12:41:28.251945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.174 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.174 Malloc0 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.175 [2024-12-10 12:41:28.951172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.175 12:41:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.175 [2024-12-10 12:41:28.979449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3903001 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:22.175 12:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:24.739 12:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3902858 00:38:24.739 12:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with 
error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 [2024-12-10 12:41:31.017623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.739 starting I/O failed 00:38:24.739 Read completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.739 starting I/O failed 00:38:24.739 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 
Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 [2024-12-10 12:41:31.018008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O 
failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Write completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 [2024-12-10 12:41:31.018378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.740 Read completed with error (sct=0, sc=8) 00:38:24.740 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 
00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Read completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 Write completed with error (sct=0, sc=8) 00:38:24.741 starting I/O failed 00:38:24.741 [2024-12-10 12:41:31.018725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.741 [2024-12-10 12:41:31.019015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.019084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.019398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.019446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.019722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.019766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.020033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.020076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.020345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.020391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.020659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.020702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.021023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.021074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.021343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.021387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 
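For reference, the tc2 sequence that produced the failures above can be reproduced by hand with the same tools; a hedged sketch using the namespace, addresses, and RPC calls from this run, executed from the SPDK repo root (rpc_cmd in the log is a thin wrapper over scripts/rpc.py; $nvmfpid stands in for the captured target PID, 3902858 here):

# start the target inside the test namespace, as nvmfappstart did above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# wait for the RPC socket to come up (the harness does this via waitforlisten)
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

# mirror host/target_disconnect.sh@19..26
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# start the reconnect workload, then kill the target to force the disconnect path
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
kill -9 "$nvmfpid"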
00:38:24.741 [2024-12-10 12:41:31.021606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.021647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.021863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.021905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.022239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.022260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.022529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.741 [2024-12-10 12:41:31.022593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.741 qpair failed and we were unable to recover it. 00:38:24.741 [2024-12-10 12:41:31.022811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.022858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.023112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.023155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.023324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.023367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.023582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.023625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.023786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.023829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.024120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.024163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 
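Every retry above fails with errno = 111. Nothing SPDK-specific is needed to decode that; a quick shell check:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused
# i.e. nothing is listening on 10.0.0.2:4420 while the target is down after kill -9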
00:38:24.742 [2024-12-10 12:41:31.024367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.024409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.024636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.024679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.024901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.024944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.025154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.025211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.025378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.025420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.025694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.025737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.025930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.025973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.026134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.026187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.026332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.026346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.026488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.026501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 
00:38:24.742 [2024-12-10 12:41:31.026704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.026718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.026800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.026813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.026990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.027004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.742 qpair failed and we were unable to recover it. 00:38:24.742 [2024-12-10 12:41:31.027208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.742 [2024-12-10 12:41:31.027222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.027428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.027441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.027648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.027695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.027881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.027905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.028080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.028102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.028266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.028288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.028374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.028393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 
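The retry storm is also visible on the wire: each attempt is a SYN answered by an RST from the namespace's kernel, since no socket listens on the port anymore. One way to watch it from the target side, assuming the interface names from this run's setup (the flag test is the standard pcap-filter idiom from the tcpdump man page):

ip netns exec cvl_0_0_ns_spdk \
    tcpdump -ni cvl_0_0 'tcp port 4420 and tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'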
00:38:24.743 [2024-12-10 12:41:31.028551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.028572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.028795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.028811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.029016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.029029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.029235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.029250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.029389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.029403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.029686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.029700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.029842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.029855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.030104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.030117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.030214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.030229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 00:38:24.743 [2024-12-10 12:41:31.030302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.743 [2024-12-10 12:41:31.030315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.743 qpair failed and we were unable to recover it. 
00:38:24.744 [2024-12-10 12:41:31.033974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.744 [2024-12-10 12:41:31.033987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.744 qpair failed and we were unable to recover it.
00:38:24.744 [2024-12-10 12:41:31.034207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.744 [2024-12-10 12:41:31.034250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.744 qpair failed and we were unable to recover it.
00:38:24.744 [2024-12-10 12:41:31.034388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.744 [2024-12-10 12:41:31.034430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.744 qpair failed and we were unable to recover it.
00:38:24.744 [2024-12-10 12:41:31.034643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.744 [2024-12-10 12:41:31.034686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.744 qpair failed and we were unable to recover it.
00:38:24.744 [2024-12-10 12:41:31.034826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.744 [2024-12-10 12:41:31.034867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.744 qpair failed and we were unable to recover it.
00:38:24.744 [2024-12-10 12:41:31.035145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.744 [2024-12-10 12:41:31.035200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:24.744 qpair failed and we were unable to recover it.
00:38:24.744 [2024-12-10 12:41:31.035413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.745 [2024-12-10 12:41:31.035458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:24.745 qpair failed and we were unable to recover it.
00:38:24.745 [2024-12-10 12:41:31.035711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.745 [2024-12-10 12:41:31.035735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.745 qpair failed and we were unable to recover it.
00:38:24.745 [2024-12-10 12:41:31.036020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.745 [2024-12-10 12:41:31.036035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.745 qpair failed and we were unable to recover it.
00:38:24.745 [2024-12-10 12:41:31.036239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.745 [2024-12-10 12:41:31.036253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.745 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.044242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.044286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.044567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.044609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.044903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.044942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.045185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.045199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.045340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.045353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.045603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.045630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.045813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.045840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.046034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.046060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.046224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.046240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.746 [2024-12-10 12:41:31.046397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.746 [2024-12-10 12:41:31.046410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.746 qpair failed and we were unable to recover it.
00:38:24.749 [2024-12-10 12:41:31.060389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.749 [2024-12-10 12:41:31.060432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.749 qpair failed and we were unable to recover it.
00:38:24.749 [2024-12-10 12:41:31.060736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.749 [2024-12-10 12:41:31.060777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.061041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.061054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.061155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.061184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.061385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.061399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.061542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.061555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.061765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.061808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.062123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.062182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.062417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.062439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.062711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.062733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.062962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.062983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.063224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.063246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.063413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.063434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.063698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.063744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.063906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.063949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.064229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.064281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.064555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.064597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.064884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.064928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.065192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.065237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.065507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.750 [2024-12-10 12:41:31.065548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.750 qpair failed and we were unable to recover it.
00:38:24.750 [2024-12-10 12:41:31.065910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.750 [2024-12-10 12:41:31.065950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.750 qpair failed and we were unable to recover it. 00:38:24.750 [2024-12-10 12:41:31.066097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.750 [2024-12-10 12:41:31.066111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.066341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.066356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.066578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.066592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.066775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.066788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.066879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.066893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.067037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.067051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.067229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.067273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.067574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.067617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.067894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.067938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 
00:38:24.751 [2024-12-10 12:41:31.068201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.068246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.068522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.068564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.068830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.068872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.069150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.069203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.069488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.069524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.069768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.069781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.069954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.069967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.070208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.070222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.070378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.070392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 00:38:24.751 [2024-12-10 12:41:31.070629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.751 [2024-12-10 12:41:31.070671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.751 qpair failed and we were unable to recover it. 
00:38:24.757 [2024-12-10 12:41:31.104370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.757 [2024-12-10 12:41:31.104456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.757 qpair failed and we were unable to recover it.
00:38:24.757 [2024-12-10 12:41:31.104862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.757 [2024-12-10 12:41:31.104947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:24.757 qpair failed and we were unable to recover it.
00:38:24.757 [2024-12-10 12:41:31.105336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.757 [2024-12-10 12:41:31.105424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:24.757 qpair failed and we were unable to recover it.
00:38:24.760 [2024-12-10 12:41:31.118983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.119001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.119067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.119079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.119237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.119282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.119468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.119495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.119606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.119632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.119824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.119870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.120151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.120206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.120418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.120460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.120652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.120694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.120931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.120972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 
00:38:24.760 [2024-12-10 12:41:31.121209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.121255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.121475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.121516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.121664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.121707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.122003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.122045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.122325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.122340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.122566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.122583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.122675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.122687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.122820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.122836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.123044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.123058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.123233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.123247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 
00:38:24.760 [2024-12-10 12:41:31.123355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.123369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.123519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.123532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.760 [2024-12-10 12:41:31.123625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.760 [2024-12-10 12:41:31.123637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.760 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.123882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.123926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.124124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.124184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.124349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.124389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.124544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.124558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.124691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.124704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.124793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.124805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.124969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.124983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 
00:38:24.761 [2024-12-10 12:41:31.125146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.125159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.125353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.125396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.125551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.125594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.125809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.125850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.126094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.126137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.126376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.126431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.126631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.126644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.126802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.126815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.127020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.127034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.127177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.127192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 
00:38:24.761 [2024-12-10 12:41:31.127368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.127382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.127467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.127480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.127627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.127673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.127884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.127934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.128275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.128324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.128536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.128558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.128800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.128821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.128918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.128939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.129157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.129177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.129422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.129436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 
00:38:24.761 [2024-12-10 12:41:31.129644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.129657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.129810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.129823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.129983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.129997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.130149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.130163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.130360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.130374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.130517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.130533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.130823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.130866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.131070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.131111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.131340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.131383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.131683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.131697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 
00:38:24.761 [2024-12-10 12:41:31.131940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.131954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.761 qpair failed and we were unable to recover it. 00:38:24.761 [2024-12-10 12:41:31.132089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.761 [2024-12-10 12:41:31.132102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.132325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.132340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.132540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.132553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.132831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.132845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.133049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.133062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.133320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.133334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.133416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.133428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.133693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.133707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.133946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.133959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 
00:38:24.762 [2024-12-10 12:41:31.134163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.134187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.134357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.134370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.134530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.134544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.134746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.134766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.134865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.134878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.135116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.135129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.135309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.135323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.135409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.135422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.135558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.135571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.135773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.135787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 
00:38:24.762 [2024-12-10 12:41:31.135943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.135957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.136188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.136232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.136499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.136543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.136804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.136845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.137152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.137208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.137421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.137462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.137729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.137771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.138003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.138044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.138341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.138386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.138611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.138624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 
00:38:24.762 [2024-12-10 12:41:31.138857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.138869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.139044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.139057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.139229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.139244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.139490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.139533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.139665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.139707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.139996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.140045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.140334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.140379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.140603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.140645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.140879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.140920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.141139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.141191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 
00:38:24.762 [2024-12-10 12:41:31.141415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.141428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.141656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.141669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.141875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.141888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.142094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.142107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.142297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.142312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.142408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.142421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.142575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.142588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.142806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.142820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.142992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.143005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.143227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.143241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 
00:38:24.762 [2024-12-10 12:41:31.143471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.143514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.143717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.143758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.144046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.144088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.144294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.144337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.144564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.144606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.144864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.144905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.145218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.145262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.762 [2024-12-10 12:41:31.145461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.762 [2024-12-10 12:41:31.145502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.762 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.145745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.145758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.145899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.145924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 
00:38:24.763 [2024-12-10 12:41:31.146194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.146239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.146514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.146557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.146824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.146838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.146978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.146991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.147160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.147180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.147327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.147340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.147474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.147488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.147710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.147751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.147949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.147991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 00:38:24.763 [2024-12-10 12:41:31.148263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.763 [2024-12-10 12:41:31.148307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.763 qpair failed and we were unable to recover it. 
00:38:24.763 [2024-12-10 12:41:31.148615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.763 [2024-12-10 12:41:31.148657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.763 qpair failed and we were unable to recover it.
00:38:24.763 [... the same three-message sequence repeats ~200 more times between 12:41:31.148859 and 12:41:31.197589, identical apart from timestamps: connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x61500033fe80, and each attempt ends "qpair failed and we were unable to recover it." ...]
00:38:24.767 [2024-12-10 12:41:31.197738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.767 [2024-12-10 12:41:31.197752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.767 qpair failed and we were unable to recover it.
00:38:24.767 [2024-12-10 12:41:31.197926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.197973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.198233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.198277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.198559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.198601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.198906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.198948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.199243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.199288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.199525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.199538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.199754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.199768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.199916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.199930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.200075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.200088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.200318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.200333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 
00:38:24.767 [2024-12-10 12:41:31.200524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.200538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.200783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.200797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.201016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.201030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.201114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.201127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.201357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.201372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.201509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.201522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.201694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.201707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.201937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.201979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.202189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.202233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.202522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.202565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 
00:38:24.767 [2024-12-10 12:41:31.202832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.202874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.203102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.203144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.203441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.203491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.203720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.203733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.203912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.203925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.204084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.204098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.204273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.204287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.204495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.204509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.204743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.204757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.204980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.204993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 
00:38:24.767 [2024-12-10 12:41:31.205253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.205267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.205420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.205433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.205612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.205654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.205945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.205987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.206196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.206240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.206525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.206567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.206775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.206818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.207097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.207139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.207380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.207422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.207637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.207680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 
00:38:24.767 [2024-12-10 12:41:31.207944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.207957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.208063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.208079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.208291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.208306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.208567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.208581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.767 [2024-12-10 12:41:31.208801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.767 [2024-12-10 12:41:31.208814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.767 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.208968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.208981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.209229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.209243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.209468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.209481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.209658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.209672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.209825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.209838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 
00:38:24.768 [2024-12-10 12:41:31.210104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.210146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.210350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.210394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.210646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.210660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.210887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.210905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.211130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.211143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.211389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.211433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.211718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.211762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.212071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.212113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.212366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.212412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.212586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.212599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 
00:38:24.768 [2024-12-10 12:41:31.212809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.212822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.212991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.213004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.213137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.213153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.213300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.213314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.213549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.213563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.213708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.213721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.213959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.213972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.214208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.214253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.214544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.214586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.214874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.214887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 
00:38:24.768 [2024-12-10 12:41:31.215020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.215033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.215188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.215218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.215451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.215465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.215644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.215658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.215906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.215919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.216066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.216079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.216263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.216289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.216464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.216477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.216704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.216745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.217011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.217053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 
00:38:24.768 [2024-12-10 12:41:31.217268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.217313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.217621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.217663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.217925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.217938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.218162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.218180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.218349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.218363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.218538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.218550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.218754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.218768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.219010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.219024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.219096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.219109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.219338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.219353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 
00:38:24.768 [2024-12-10 12:41:31.219583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.219597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.219754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.219768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.219997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.220010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.220252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.220296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.220580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.220623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.220921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.220934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.221206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.221220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.221373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.221387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.221627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.768 [2024-12-10 12:41:31.221669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.768 qpair failed and we were unable to recover it. 00:38:24.768 [2024-12-10 12:41:31.221862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.221905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 
00:38:24.769 [2024-12-10 12:41:31.222198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.222243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.222449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.222491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.222750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.222799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.223098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.223140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.223452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.223497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.223728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.223768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.223954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.223968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.224134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.224147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.224346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.224391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.224706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.224767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 
00:38:24.769 [2024-12-10 12:41:31.224998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.225041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.225271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.225315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.225640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.225653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.225880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.225893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.225997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.226011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.226278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.226314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.226466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.226510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.226809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.226863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.227069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.227111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.227433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.227477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 
00:38:24.769 [2024-12-10 12:41:31.227731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.227745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.227900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.227913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.228193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.228237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.228488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.228530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.228827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.228869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.229077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.229119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.229452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.229497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.229783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.229825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.230069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.230112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 00:38:24.769 [2024-12-10 12:41:31.230415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.769 [2024-12-10 12:41:31.230460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.769 qpair failed and we were unable to recover it. 
00:38:24.769 [2024-12-10 12:41:31.230617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.769 [2024-12-10 12:41:31.230659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.769 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for tqpair=0x61500033fe80 from 12:41:31.230908 through 12:41:31.235129 ...]
00:38:24.769 [2024-12-10 12:41:31.235422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.769 [2024-12-10 12:41:31.235490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:24.769 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x615000326480 from 12:41:31.235680 through 12:41:31.252189 ...]
00:38:24.770 [2024-12-10 12:41:31.252405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set
00:38:24.770 [2024-12-10 12:41:31.252735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.770 [2024-12-10 12:41:31.252796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.770 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for tqpair=0x61500032ff80 from 12:41:31.253105 through 12:41:31.282587 ...]
00:38:24.773 [2024-12-10 12:41:31.282697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.773 [2024-12-10 12:41:31.282717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.773 qpair failed and we were unable to recover it.
00:38:24.773 [2024-12-10 12:41:31.283043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.283067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.283242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.283265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.283507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.283528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.283699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.283721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.283955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.283975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.284163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.284205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.284358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.284378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.284629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.284650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.284892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.284913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.285176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.285198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 
00:38:24.773 [2024-12-10 12:41:31.285348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.285370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.285622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.285644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.285864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.285885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.286076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.286097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.286296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.286318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.286568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.286589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.286765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.286785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.287031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.287056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.287225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.287247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.287501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.287536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 
00:38:24.773 [2024-12-10 12:41:31.287766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.287783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.287989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.288004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.288233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.288247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.288474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.288488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.288709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.288723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.288951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.288965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.289185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.289200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.289376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.289389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.289550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.289564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.289744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.289757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 
00:38:24.773 [2024-12-10 12:41:31.289969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.289983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.290141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.290155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.290310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.290324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.290579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.290593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.290682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.290694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.290786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.290798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.290955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.290969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.291104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.291118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.291320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.291338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.291489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.291503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 
00:38:24.773 [2024-12-10 12:41:31.291732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.291746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.291973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.291987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.773 qpair failed and we were unable to recover it. 00:38:24.773 [2024-12-10 12:41:31.292219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.773 [2024-12-10 12:41:31.292234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.292394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.292408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.292505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.292518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.292673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.292687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.292791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.292804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.293014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.293028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.293265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.293280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.293549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.293562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 
00:38:24.774 [2024-12-10 12:41:31.293769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.293782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.293959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.293972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.294164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.294182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.294391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.294410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.294571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.294585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.294804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.294818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.295000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.295014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.295173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.295190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.295341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.295354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.295570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.295583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 
00:38:24.774 [2024-12-10 12:41:31.295745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.295759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.295960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.295973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.296138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.296151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.296389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.296403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.296611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.296625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.296861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.296874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.297149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.297163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.297357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.297371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.297624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.297638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.297901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.297915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 
00:38:24.774 [2024-12-10 12:41:31.298116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.298129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.298313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.298327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.298513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.298527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.298769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.774 [2024-12-10 12:41:31.298783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.774 qpair failed and we were unable to recover it. 00:38:24.774 [2024-12-10 12:41:31.298999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.299012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.299115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.299129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.299224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.299238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.299407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.299420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.299647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.299662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.299836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.299850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 
00:38:24.775 [2024-12-10 12:41:31.300006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.300020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.300115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.300128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.300306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.300320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.300526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.300539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.300748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.300762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.300992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.301005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.301238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.301252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.301456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.301470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.301570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.301583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.301813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.301826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 
00:38:24.775 [2024-12-10 12:41:31.301981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.301994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.302219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.302234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.302372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.302386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.302475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.302487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.302643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.302656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.302881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.302895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.303046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.303061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.303263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.303282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.303432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.303445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.303686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.303700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 
00:38:24.775 [2024-12-10 12:41:31.303953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.303967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.304173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.304187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.304394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.304408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.304558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.304572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.304799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.304813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.304912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.304924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.305064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.305077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.305296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.305310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.305557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.305571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.305744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.305757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 
00:38:24.775 [2024-12-10 12:41:31.305959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.305972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.306178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.306192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.306437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.306451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.306619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.306632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.306778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.306791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.307014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.307028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.307257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.307272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.307449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.307483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.307637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.307651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.307881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.307895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 
00:38:24.775 [2024-12-10 12:41:31.308104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.308118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.308347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.308361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.308518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.308531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.308694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.308708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.308889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.308903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.309064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.309077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.309321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.309335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.309510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.309524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.309677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.309691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 00:38:24.775 [2024-12-10 12:41:31.309897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.775 [2024-12-10 12:41:31.309910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.775 qpair failed and we were unable to recover it. 
00:38:24.775 [2024-12-10 12:41:31.310134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.776 [2024-12-10 12:41:31.310147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.776 qpair failed and we were unable to recover it.
00:38:24.780 [... the same three-entry sequence -- posix_sock_create: "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock: "sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." -- repeats continuously from [2024-12-10 12:41:31.310] through [2024-12-10 12:41:31.353], mostly for tqpair=0x61500033fe80 and periodically cycling through tqpair=0x615000350000, 0x615000326480, and 0x61500032ff80; every attempt targets addr=10.0.0.2, port=4420 and fails identically ...]
00:38:24.780 [2024-12-10 12:41:31.353581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.353596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.353800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.353815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.354005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.354020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.354204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.354220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.354365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.354380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.354580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.354596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.354786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.354801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.355040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.355056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.355230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.355246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.355414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.355429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 
00:38:24.780 [2024-12-10 12:41:31.355523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.355537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.355701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.355716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.355857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.355871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.356071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.356086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.356241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.356257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.356484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.356500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.356676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.356691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.356770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.356783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.356999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.357014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.357185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.357200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 
00:38:24.780 [2024-12-10 12:41:31.357425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.357440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.357662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.357676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.357941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.357956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.358159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.358182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.358329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.358346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.358489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.358505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.358584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.358598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.358732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.358748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.358973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.358989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.359153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.359173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 
00:38:24.780 [2024-12-10 12:41:31.359395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.359410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.359616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.359638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.359884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.359899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.360031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.360047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.360269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.360284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.360378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.360392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.360536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.360551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.360703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.360718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.360866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.360882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.361040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.361055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 
00:38:24.780 [2024-12-10 12:41:31.361258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.780 [2024-12-10 12:41:31.361273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.780 qpair failed and we were unable to recover it. 00:38:24.780 [2024-12-10 12:41:31.361475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.361489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.361698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.361713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.361879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.361894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.362118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.362133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.362277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.362293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.362504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.362518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.362676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.362691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.362774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.362788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.362988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.363003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 
00:38:24.781 [2024-12-10 12:41:31.363152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.363172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.363328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.363349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.363491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.363506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.363707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.363722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.363817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.363830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.363987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.364002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.364155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.364175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.364344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.364359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.364489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.364504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.364685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.364700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 
00:38:24.781 [2024-12-10 12:41:31.364921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.364935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.365186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.365202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.365342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.365356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.365455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.365468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.365680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.365697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.365846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.365861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.366035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.366051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.366121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.366135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.366337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.366353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.366431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.366445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 
00:38:24.781 [2024-12-10 12:41:31.366537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.366550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.366646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.366660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.366878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.366893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.367077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.367092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.367247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.367262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.367335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.367348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.367503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.367517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.367677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.367692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.367855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.367870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.367951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.367965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 
00:38:24.781 [2024-12-10 12:41:31.368117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.368131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.368362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.368378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.368607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.368623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.368875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.368890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.369058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.369073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.369294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.369311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.781 [2024-12-10 12:41:31.369485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.781 [2024-12-10 12:41:31.369500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.781 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.369690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.369705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.369920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.369936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.370114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.370129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 
00:38:24.782 [2024-12-10 12:41:31.370292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.370308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.370542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.370557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.370790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.370805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.371006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.371020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.371250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.371269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.371496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.371511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.371738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.371753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.371955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.371970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.372199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.372214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.372447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.372466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 
00:38:24.782 [2024-12-10 12:41:31.372696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.372710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.372851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.372865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.373004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.373019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.373247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.373262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.373477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.373494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.373579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.373593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.373725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.373739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.373904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.373919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.374120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.374135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.374351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.374366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 
00:38:24.782 [2024-12-10 12:41:31.374521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.374535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.374746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.374761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.374981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.374996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.375159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.375185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.375413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.375428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.375648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.375668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.375927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.375941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.376195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.376210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.376366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.376381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.376551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.376566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 
00:38:24.782 [2024-12-10 12:41:31.376768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.376782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.376951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.376966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.377109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.377124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.377326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.377341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.377453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.377467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.377669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.377685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.377885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.377899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.378146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.378161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.378341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.378356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 00:38:24.782 [2024-12-10 12:41:31.378609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.782 [2024-12-10 12:41:31.378624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.782 qpair failed and we were unable to recover it. 
00:38:24.782 [2024-12-10 12:41:31.378788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.782 [2024-12-10 12:41:31.378803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.782 qpair failed and we were unable to recover it.
00:38:24.788 [... the same three-line error sequence repeats for every reconnect attempt from 12:41:31.378880 through 12:41:31.418886, each targeting addr=10.0.0.2, port=4420 and failing with errno = 111 (connection refused); all attempts report tqpair=0x61500033fe80 except three around 12:41:31.406, which report tqpair=0x615000350000 ...]
00:38:24.788 [2024-12-10 12:41:31.418967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.418981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.419227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.419243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.419448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.419463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.419618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.419633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.419848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.419866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.420023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.420037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.420184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.420199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.420429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.420444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.788 [2024-12-10 12:41:31.420554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.788 [2024-12-10 12:41:31.420568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.788 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.420739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.420753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 
00:38:24.789 [2024-12-10 12:41:31.420894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.420909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.421064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.421080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.421212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.421228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.421446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.421461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.421621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.421636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.421808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.421823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.422054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.422069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.422214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.422230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.422389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.422404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.422629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.422644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 
00:38:24.789 [2024-12-10 12:41:31.422822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.422837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.423016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.423031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.423258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.423273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.423523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.423538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.423708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.423723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.423962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.423981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.424133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.424148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.424366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.424382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.424586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.424601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.424777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.424792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 
00:38:24.789 [2024-12-10 12:41:31.425031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.425046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.425212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.425227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.425402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.789 [2024-12-10 12:41:31.425417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.789 qpair failed and we were unable to recover it. 00:38:24.789 [2024-12-10 12:41:31.425579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.425595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.425760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.425780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.426024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.426039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.426136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.426150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.426307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.426322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.426465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.426480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.426621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.426635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 
00:38:24.790 [2024-12-10 12:41:31.426839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.426853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.427102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.427117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.427277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.427292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.427492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.427507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.427660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.427677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.427921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.427936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.428114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.428129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.428214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.428228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.790 [2024-12-10 12:41:31.428456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.790 [2024-12-10 12:41:31.428471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.790 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.428682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.428697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 
00:38:24.791 [2024-12-10 12:41:31.428841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.428856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.429024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.429039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.429260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.429276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.429481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.429496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.429638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.429654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.429881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.429895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.429965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.429979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.430230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.430245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.430499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.430514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.430666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.430681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 
00:38:24.791 [2024-12-10 12:41:31.430826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.430841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.430987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.431003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.431156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.431175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.431422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.431437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.431686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.431700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.431848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.431862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.432067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.432082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.432236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.432251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.432435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.432451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.432625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.432639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 
00:38:24.791 [2024-12-10 12:41:31.432868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.432883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.433061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.791 [2024-12-10 12:41:31.433076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.791 qpair failed and we were unable to recover it. 00:38:24.791 [2024-12-10 12:41:31.433184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.433199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.433391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.433406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.433574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.433588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.433834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.433850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.434020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.434035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.434129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.434142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.434372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.434388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.434640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.434655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 
00:38:24.792 [2024-12-10 12:41:31.434806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.434821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.434978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.434994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.435197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.435212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.435373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.435387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.435624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.435642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.435868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.435883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.436037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.436052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.436262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.436277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.436357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.436370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.436524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.436539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 
00:38:24.792 [2024-12-10 12:41:31.436676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.436691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.436940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.436955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.437180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.437195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.437307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.437322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.437451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.437466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.437610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.792 [2024-12-10 12:41:31.437626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.792 qpair failed and we were unable to recover it. 00:38:24.792 [2024-12-10 12:41:31.437694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.437707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.437845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.437860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.438083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.438103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.438191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.438205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 
00:38:24.793 [2024-12-10 12:41:31.438435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.438450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.438540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.438553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.438638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.438651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.438860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.438875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.439074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.439089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.439234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.439249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.439449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.439465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.439612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.439628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.439778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.439793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.439939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.439954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 
00:38:24.793 [2024-12-10 12:41:31.440102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.440118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.440344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.440375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.440546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.440572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.440816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.440838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.441014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.441030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.441275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.441290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.441443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.441458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.441607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.441621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.441844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.441859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.442006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.442022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 
00:38:24.793 [2024-12-10 12:41:31.442213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.793 [2024-12-10 12:41:31.442228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.793 qpair failed and we were unable to recover it. 00:38:24.793 [2024-12-10 12:41:31.442412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.442428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.442575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.442591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.442739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.442754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.442981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.443004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.443205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.443225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.443489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.443503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.443648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.443662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.443805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.443819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 00:38:24.794 [2024-12-10 12:41:31.443911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-12-10 12:41:31.443925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.794 qpair failed and we were unable to recover it. 
00:38:24.794 [2024-12-10 12:41:31.444094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.794 [2024-12-10 12:41:31.444109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.794 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats continuously from 12:41:31.444 through 12:41:31.484; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, against tqpair=0x61500033fe80 except for three attempts around 12:41:31.448 against tqpair=0x61500032ff80 ...]
00:38:24.801 [2024-12-10 12:41:31.484199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.801 [2024-12-10 12:41:31.484215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.801 qpair failed and we were unable to recover it.
00:38:24.801 [2024-12-10 12:41:31.484380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.484395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.484565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.484580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.484779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.484793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.484889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.484903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.485033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.485049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.485203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.485218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.485367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.485382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.485643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.485658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.485745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.485759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.485935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.485949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 
00:38:24.801 [2024-12-10 12:41:31.486184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.486199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.486431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.486449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.486675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.486690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.486835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.486850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.486994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.487009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.487186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.487201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.487361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.487375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.487529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.487544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.487766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.487781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.487949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.487967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 
00:38:24.801 [2024-12-10 12:41:31.488143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.488162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.488371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.488387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.488587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.488602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.488863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.488879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.489107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.489122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.489326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.489342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.489477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.489492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.489634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.489648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.489891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.489906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.490133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.490148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 
00:38:24.801 [2024-12-10 12:41:31.490293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.490308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.490549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.490563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.490767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.490782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.490935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.490950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.491086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.491101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.801 [2024-12-10 12:41:31.491311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.801 [2024-12-10 12:41:31.491327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.801 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.491419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.491432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.491522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.491536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.491609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.491623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.491826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.491840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 
00:38:24.802 [2024-12-10 12:41:31.492075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.492090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.492322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.492337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.492436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.492450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.492665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.492680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.492891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.492906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.493126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.493141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.493304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.493321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.493541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.493557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.493807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.493822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.493977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.493991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 
00:38:24.802 [2024-12-10 12:41:31.494151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.494171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.494343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.494361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.494558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.494573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.494723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.494738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.494910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.494924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.495179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.495195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.495352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.495367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.495532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.495548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.495698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.495713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.495934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.495948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 
00:38:24.802 [2024-12-10 12:41:31.496164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.496184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.496272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.496286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.496508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.496523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.496653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.496668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.496815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.496830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.496911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.496925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.497071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.497089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.497292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.497308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.497468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.497482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.497694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.497709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 
00:38:24.802 [2024-12-10 12:41:31.497958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.497974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.498180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.498196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.498401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.498416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.498550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.498565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.498791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.498806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.499069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.499084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.499230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.499246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.499394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.499409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.499492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.499506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.499731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.499746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 
00:38:24.802 [2024-12-10 12:41:31.499969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.499984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.500232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.500248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.500472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.500486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.500654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.500670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.500893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.500914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.501083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.501098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.501192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.501206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.501363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.501377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.501515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.501530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 00:38:24.802 [2024-12-10 12:41:31.501700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.802 [2024-12-10 12:41:31.501716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.802 qpair failed and we were unable to recover it. 
00:38:24.803 [2024-12-10 12:41:31.501886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.501901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.502069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.502087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.502245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.502260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.502429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.502444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.502614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.502629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.502713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.502727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.502952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.502966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.503110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.503125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.503275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.503291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.503430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.503445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 
00:38:24.803 [2024-12-10 12:41:31.503606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.503621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.503851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.503866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.503947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.503960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.504109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.504123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.504363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.504378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.504582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.504597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.504754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.504769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.504986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.505001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.505148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.505163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.505305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.505320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 
00:38:24.803 [2024-12-10 12:41:31.505532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.505547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.505798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.505813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.506061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.506076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.506291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.506306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.506476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.506491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.506629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.506644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.506793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.506808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.506941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.506957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.507138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.507153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.507313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.507339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 
00:38:24.803 [2024-12-10 12:41:31.507504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.507531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.507830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.507853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.508119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.508141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.508323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.508346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.508506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.508530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.508775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.508792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.509040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.509055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.509209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.509226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.509308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.509321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 00:38:24.803 [2024-12-10 12:41:31.509545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.803 [2024-12-10 12:41:31.509561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:24.803 qpair failed and we were unable to recover it. 
00:38:24.803 [2024-12-10 12:41:31.509653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.803 [2024-12-10 12:41:31.509666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:24.803 qpair failed and we were unable to recover it.
00:38:24.804 [2024-12-10 12:41:31.514936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.804 [2024-12-10 12:41:31.514960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:24.804 qpair failed and we were unable to recover it.
00:38:24.805 [2024-12-10 12:41:31.530459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:24.805 [2024-12-10 12:41:31.530494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:24.805 qpair failed and we were unable to recover it.
[... roughly 200 further triplets of the same three messages elided: connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.", timestamps 2024-12-10 12:41:31.509 through 12:41:31.549 (console clock 00:38:24.803 to 00:38:25.092), tqpair values 0x61500033fe80 (the large majority), 0x61500032ff80, 0x615000350000, and 0x615000326480, all against addr=10.0.0.2, port=4420 ...]
00:38:25.092 [2024-12-10 12:41:31.549716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.092 [2024-12-10 12:41:31.549731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.092 qpair failed and we were unable to recover it.
00:38:25.092 [2024-12-10 12:41:31.549887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.549903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.550105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.550119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.550269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.550284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.550510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.550525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.550681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.550696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.550923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.550938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.551035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.551050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.551271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.551286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.551446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.551464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.551634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.551650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 
00:38:25.093 [2024-12-10 12:41:31.551787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.551801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.552049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.552063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.552287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.552302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.552456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.552471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.552623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.552638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.552795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.552810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.553029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.553043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.553260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.553275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.553462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.553486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.553607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.553630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 
00:38:25.093 [2024-12-10 12:41:31.553888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.553911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.554162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.554189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.554412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.554435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.554596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.554618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.554872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.554889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.554977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.554990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.555240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.555255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.555403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.555418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.555619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.555634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.555778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.555793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 
00:38:25.093 [2024-12-10 12:41:31.555952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.555967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.556117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.556132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.556319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.556335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.556570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.556589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.556832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.556847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.556996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.557011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.557217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.557232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.557331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.557345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.557494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.093 [2024-12-10 12:41:31.557509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.093 qpair failed and we were unable to recover it. 00:38:25.093 [2024-12-10 12:41:31.557679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.557693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 
00:38:25.094 [2024-12-10 12:41:31.557845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.557860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.558035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.558049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.558222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.558238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.558392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.558407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.558679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.558693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.558849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.558864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.559011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.559026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.559200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.559216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.559417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.559432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.559621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.559637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 
00:38:25.094 [2024-12-10 12:41:31.559807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.559822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.559978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.559993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.560196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.560212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.560424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.560439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.560591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.560605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.560849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.560864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.561089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.561104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.561274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.561290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.561433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.561451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.561547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.561561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 
00:38:25.094 [2024-12-10 12:41:31.561706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.561721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.561857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.561872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.562094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.562109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.562193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.562207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.562359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.562374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.562523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.562538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.562699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.562715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.562914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.562928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.563172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.563187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.563456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.563471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 
00:38:25.094 [2024-12-10 12:41:31.563641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.563656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.563825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.563841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.563935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.563949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.564111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.564126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.564357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.564372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.564532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.564547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.564680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.564695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.564842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.094 [2024-12-10 12:41:31.564857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.094 qpair failed and we were unable to recover it. 00:38:25.094 [2024-12-10 12:41:31.565111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.565126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.565204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.565218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 
00:38:25.095 [2024-12-10 12:41:31.565369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.565384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.565553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.565568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.565651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.565665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.565809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.565825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.565970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.565985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.566196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.566211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.566358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.566374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.566447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.566460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.566613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.566627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.566775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.566790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 
00:38:25.095 [2024-12-10 12:41:31.566871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.566885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.567096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.567111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.567367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.567383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.567539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.567554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.567785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.567800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.567950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.567965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.568138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.568154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.568340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.568360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.568584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.568602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.568846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.568860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 
00:38:25.095 [2024-12-10 12:41:31.568944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.568958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.569039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.569053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.569239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.569254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.569476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.569490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.569624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.569638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.569886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.569901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.570103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.570119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.570203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.570218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.570363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.570377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.570626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.570641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 
00:38:25.095 [2024-12-10 12:41:31.570867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.570882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.095 [2024-12-10 12:41:31.571034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.095 [2024-12-10 12:41:31.571049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.095 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.571197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.571211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.571441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.571456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.571663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.571678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.571952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.571967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.572135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.572150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.572267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.572285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.572511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.572525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.572679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.572694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 
00:38:25.096 [2024-12-10 12:41:31.572923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.572938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.573081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.573096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.573239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.573255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.573480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.573495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.573636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.573661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.573895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.573910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.574056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.574071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.574202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.574218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.574418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.574433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 00:38:25.096 [2024-12-10 12:41:31.574575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.096 [2024-12-10 12:41:31.574590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.096 qpair failed and we were unable to recover it. 
00:38:25.096 [2024-12-10 12:41:31.574719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.096 [2024-12-10 12:41:31.574734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.096 qpair failed and we were unable to recover it.
00:38:25.101 [... the same three-line error block repeats for every reconnect attempt from 12:41:31.574719 through 12:41:31.615029; only the timestamps advance. The failing tqpair is 0x61500033fe80 throughout, apart from single attempts on tqpair=0x61500032ff80, 0x615000326480, and 0x615000350000, all against addr=10.0.0.2, port=4420 ...]
00:38:25.101 [2024-12-10 12:41:31.615292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.101 [2024-12-10 12:41:31.615307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.101 qpair failed and we were unable to recover it. 00:38:25.101 [2024-12-10 12:41:31.615463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.101 [2024-12-10 12:41:31.615478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.101 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.615719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.615734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.615907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.615928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.616150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.616165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.616394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.616409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.616625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.616640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.616717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.616730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.616884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.616898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.617121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.617136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 
00:38:25.102 [2024-12-10 12:41:31.617345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.617360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.617505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.617520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.617655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.617670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.617879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.617894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.618036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.618050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.618301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.618318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.618541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.618560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.618705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.618720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.618870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.618884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.618978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.618992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 
00:38:25.102 [2024-12-10 12:41:31.619201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.619217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.619359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.619374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.619574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.619588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.619766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.619780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.619981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.619996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.620297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.620322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.620608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.620635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.620865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.620890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.621130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.621147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.621319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.621335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 
00:38:25.102 [2024-12-10 12:41:31.621484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.621498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.621674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.621689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.621940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.621956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.622212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.622230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.622416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.622429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.622648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.622664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.622818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.622833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.623066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.623080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.623282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.623300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.623576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.623592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 
00:38:25.102 [2024-12-10 12:41:31.623819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.102 [2024-12-10 12:41:31.623834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.102 qpair failed and we were unable to recover it. 00:38:25.102 [2024-12-10 12:41:31.624002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.624016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.624179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.624195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.624367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.624381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.624534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.624549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.624768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.624783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.625010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.625025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.625216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.625232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.625549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.625565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.625760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.625776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 
00:38:25.103 [2024-12-10 12:41:31.625952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.625967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.626102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.626117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.626217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.626231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.626384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.626398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.626599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.626614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.626705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.626718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.626883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.626898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.627097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.627112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.627331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.627347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.627576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.627590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 
00:38:25.103 [2024-12-10 12:41:31.627849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.627864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.628087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.628103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.628326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.628341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.628630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.628646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.628851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.628866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.629097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.629123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.629324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.629352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.629550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.629574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.629831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.629848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.630085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.630100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 
00:38:25.103 [2024-12-10 12:41:31.630204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.630218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.630458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.630473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.630642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.630657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.630810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.630824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.630936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.630949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.631148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.631163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.631396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.631412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.631556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.631571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.631713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.631730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 00:38:25.103 [2024-12-10 12:41:31.631961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.103 [2024-12-10 12:41:31.631976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.103 qpair failed and we were unable to recover it. 
00:38:25.103 [2024-12-10 12:41:31.632122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.632137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.632340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.632357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.632492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.632507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.632735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.632751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.632903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.632920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.633082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.633099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.633325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.633347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.633495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.633511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.633591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.633620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.633839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.633854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 
00:38:25.104 [2024-12-10 12:41:31.634081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.634097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.634297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.634313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.634484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.634499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.634654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.634668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.634818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.634833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.634983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.634998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.635152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.635171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.635315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.635330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.635494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.635510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.635712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.635727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 
00:38:25.104 [2024-12-10 12:41:31.635939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.635954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.636177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.636192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.636412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.636426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.636651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.636666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.636837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.636852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.637100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.637127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.637309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.637336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.637527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.637552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.637835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.637853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.638011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.638026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 
00:38:25.104 [2024-12-10 12:41:31.638231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.638247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.638411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.638427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.638662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.638677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.638826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.638841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.638987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.104 [2024-12-10 12:41:31.639003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.104 qpair failed and we were unable to recover it. 00:38:25.104 [2024-12-10 12:41:31.639153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.639173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.639383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.639399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.639625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.639640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.639795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.639812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.639951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.639967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 
00:38:25.105 [2024-12-10 12:41:31.640122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.640137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.640376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.640392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.640527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.640541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.640752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.640767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.640916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.640931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.641023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.641037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.641295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.641312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.641470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.641485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.641712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.641727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 00:38:25.105 [2024-12-10 12:41:31.641972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.105 [2024-12-10 12:41:31.641988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.105 qpair failed and we were unable to recover it. 
00:38:25.105 [2024-12-10 12:41:31.642207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.105 [2024-12-10 12:41:31.642223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.105 qpair failed and we were unable to recover it.
[... the same three-record retry pattern (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 12:41:31.642375 through 12:41:31.682305, every attempt targeting addr=10.0.0.2, port=4420; most retries are on tqpair=0x61500033fe80, with occasional attempts on tqpair=0x61500032ff80, 0x615000326480, and 0x615000350000 ...]
00:38:25.110 [2024-12-10 12:41:31.682450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.110 [2024-12-10 12:41:31.682465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.110 qpair failed and we were unable to recover it. 00:38:25.110 [2024-12-10 12:41:31.682667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.110 [2024-12-10 12:41:31.682682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.110 qpair failed and we were unable to recover it. 00:38:25.110 [2024-12-10 12:41:31.682861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.110 [2024-12-10 12:41:31.682876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.110 qpair failed and we were unable to recover it. 00:38:25.110 [2024-12-10 12:41:31.683042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.110 [2024-12-10 12:41:31.683058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.110 qpair failed and we were unable to recover it. 00:38:25.110 [2024-12-10 12:41:31.683156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.110 [2024-12-10 12:41:31.683175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.110 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.683396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.683430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.683645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.683660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.683848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.683864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.684066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.684081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.684226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.684242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 
00:38:25.111 [2024-12-10 12:41:31.684473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.684489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.684652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.684673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.684899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.684914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.685068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.685082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.685319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.685335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.685471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.685486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.685629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.685645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.685802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.685817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.685898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.685911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.686049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.686062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 
00:38:25.111 [2024-12-10 12:41:31.686208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.686224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.686419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.686435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.686589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.686604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.686683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.686698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.686925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.686939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.687025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.687039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.687261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.687276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.687419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.687434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.687613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.687628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.687791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.687806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 
00:38:25.111 [2024-12-10 12:41:31.687941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.687955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.688200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.688216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.688389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.688405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.688560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.688575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.688742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.688757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.689013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.689028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.689230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.689245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.689378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.689392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.689631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.689647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.689857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.689873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 
00:38:25.111 [2024-12-10 12:41:31.689971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.689984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.690155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.690185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.690304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.690320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.690585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.111 [2024-12-10 12:41:31.690599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.111 qpair failed and we were unable to recover it. 00:38:25.111 [2024-12-10 12:41:31.690691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.690704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.690930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.690949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.691084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.691099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.691283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.691299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.691447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.691463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.691600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.691614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 
00:38:25.112 [2024-12-10 12:41:31.694459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.694474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.694636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.694650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.694753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.694767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.694974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.694990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.695080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.695094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.695280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.695295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.695452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.695467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.695557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.695571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.695726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.695741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.695911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.695926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 
00:38:25.112 [2024-12-10 12:41:31.696128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.696143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.696431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.696447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.696600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.696615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.696774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.696789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.696871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.696885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.697130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.697145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.697305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.697321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.697423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.697437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.697585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.697601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.697749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.697764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 
00:38:25.112 [2024-12-10 12:41:31.697939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.697954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.698103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.698117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.698275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.698291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.698471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.698486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.698713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.698728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.698888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.698908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.699148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.699163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.699325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.699341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.699563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.699578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.699735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.699750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 
00:38:25.112 [2024-12-10 12:41:31.699846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.699859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.700085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.700100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.700189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.112 [2024-12-10 12:41:31.700204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.112 qpair failed and we were unable to recover it. 00:38:25.112 [2024-12-10 12:41:31.700353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.700368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.700508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.700523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.700628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.700645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.700803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.700818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.700982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.700997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.701178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.701195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.701365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.701380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 
00:38:25.113 [2024-12-10 12:41:31.701464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.701477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.701627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.701642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.701795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.701810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.701992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.702007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.702142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.702157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.702257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.702272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.702463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.702479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.702692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.702707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.702794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.702808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.702961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.702976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 
00:38:25.113 [2024-12-10 12:41:31.703121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.703136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.703346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.703361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.703505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.703520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.703619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.703633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.703776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.703790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.703941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.703956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.704156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.704177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.704327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.704342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.704565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.704580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.704721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.704736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 
00:38:25.113 [2024-12-10 12:41:31.704915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.704931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.705191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.705207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.705358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.705373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.705523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.705538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.705677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.705692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.705881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.705896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.706113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.706128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.706261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.113 [2024-12-10 12:41:31.706278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.113 qpair failed and we were unable to recover it. 00:38:25.113 [2024-12-10 12:41:31.706506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.706527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.706680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.706696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 
00:38:25.114 [2024-12-10 12:41:31.706904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.706920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.707119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.707135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.707351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.707366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.707595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.707610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.707798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.707813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.708099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.708117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.708210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.708225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.708373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.708389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.708543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.708558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 00:38:25.114 [2024-12-10 12:41:31.708652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.114 [2024-12-10 12:41:31.708666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.114 qpair failed and we were unable to recover it. 
00:38:25.114 [2024-12-10 12:41:31.708885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.114 [2024-12-10 12:41:31.708901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.114 qpair failed and we were unable to recover it.
00:38:25.114 [... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for roughly 210 consecutive connection attempts between 12:41:31.708885 and 12:41:31.757931, covering tqpairs 0x61500033fe80, 0x61500032ff80, 0x615000326480, and 0x615000350000, all targeting addr=10.0.0.2, port=4420 ...]
00:38:25.119 [2024-12-10 12:41:31.757889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.119 [2024-12-10 12:41:31.757931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.119 qpair failed and we were unable to recover it.
00:38:25.119 [2024-12-10 12:41:31.758199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.758244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.119 [2024-12-10 12:41:31.758406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.758448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.119 [2024-12-10 12:41:31.758613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.758655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.119 [2024-12-10 12:41:31.758880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.758895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.119 [2024-12-10 12:41:31.759142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.759195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.119 [2024-12-10 12:41:31.759400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.759442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.119 [2024-12-10 12:41:31.759657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.759701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.119 [2024-12-10 12:41:31.759911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.119 [2024-12-10 12:41:31.759927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.119 qpair failed and we were unable to recover it. 00:38:25.120 [2024-12-10 12:41:31.760097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.120 [2024-12-10 12:41:31.760140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.120 qpair failed and we were unable to recover it. 00:38:25.120 [2024-12-10 12:41:31.760366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.120 [2024-12-10 12:41:31.760408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.120 qpair failed and we were unable to recover it. 
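errno 111 on Linux is ECONNREFUSED: nothing was accepting TCP connections on 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) at the moment posix_sock_create() issued each connect(). The standalone probe below is an illustration only, not SPDK code; it reproduces the same failure mode against the address and port taken from the log.

/* probe_target.c -- minimal sketch (not SPDK code): reproduce the check
 * that is failing in the log above.  With no listener on 10.0.0.2:4420,
 * connect() fails with errno 111 (ECONNREFUSED). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),              /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener this prints: connect: errno=111 (Connection refused) */
        fprintf(stderr, "connect: errno=%d (%s)\n", errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("listener is up on 10.0.0.2:4420\n");
    close(fd);
    return 0;
}

Compiled with cc probe_target.c -o probe_target, it keeps printing "connect: errno=111 (Connection refused)" for as long as the nvmf target has not started listening.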
00:38:25.120 [two more identical failures of tqpair=0x61500033fe80 at 12:41:31.760613 and 12:41:31.760961, then the failing connect attempts move to a second qpair object:]
00:38:25.120 [2024-12-10 12:41:31.761235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.120 [2024-12-10 12:41:31.761324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:25.120 qpair failed and we were unable to recover it.
00:38:25.120 [the same three-line failure repeats 26 more times for tqpair=0x615000326480 between 12:41:31.761595 and 12:41:31.768874]
00:38:25.120 [2024-12-10 12:41:31.769065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.120 [2024-12-10 12:41:31.769083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.120 qpair failed and we were unable to recover it.
00:38:25.120 [2024-12-10 12:41:31.769186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.120 [2024-12-10 12:41:31.769201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.120 qpair failed and we were unable to recover it.
00:38:25.121 [the same three-line failure repeats 129 more times for tqpair=0x61500033fe80 between 12:41:31.769346 and 12:41:31.791157]
00:38:25.124 [2024-12-10 12:41:31.791427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.791443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.791537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.791551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.791637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.791655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.791819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.791834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.792053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.792069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.792226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.792242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.792424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.792439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.792641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.792656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.792724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.792738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.792993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.793008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 
00:38:25.124 [2024-12-10 12:41:31.793228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.793244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.793403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.793419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.793579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.793594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.793755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.793770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.793989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.794090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.794325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.794477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.794624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.794716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 
00:38:25.124 [2024-12-10 12:41:31.794890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.794981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.794994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.795252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.795268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.795471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.795485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.795711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.795726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.795941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.795955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.124 qpair failed and we were unable to recover it. 00:38:25.124 [2024-12-10 12:41:31.796111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.124 [2024-12-10 12:41:31.796126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.796342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.796357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.796507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.796522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.796632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.796648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 
00:38:25.125 [2024-12-10 12:41:31.796732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.796745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.797034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.797049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.797300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.797316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.797418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.797433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.797638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.797653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.797742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.797758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.797936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.797950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.798156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.798176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.798349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.798364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.798442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.798455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 
00:38:25.125 [2024-12-10 12:41:31.798630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.798645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.798894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.798908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.799106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.799124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.799283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.799298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.799463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.799477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.799624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.799640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.799740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.799755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.800010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.800024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.800270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.800286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.800440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.800455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 
00:38:25.125 [2024-12-10 12:41:31.800617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.800632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.800837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.800852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.800993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.801008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.801231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.801246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.801348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.801362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.801522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.801536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.801695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.801710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.801926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.801944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.802195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.802211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.802367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.802383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 
00:38:25.125 [2024-12-10 12:41:31.802588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.802603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.802762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.802777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.802873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.802888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.803019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.803034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.803105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.803118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.125 qpair failed and we were unable to recover it. 00:38:25.125 [2024-12-10 12:41:31.803292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.125 [2024-12-10 12:41:31.803308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.803411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.803426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.803571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.803586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.803685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.803700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.803924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.803940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 
00:38:25.126 [2024-12-10 12:41:31.804110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.804125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.804335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.804350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.804498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.804512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.804620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.804635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.804786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.804801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.804959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.804974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.805209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.805225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.805328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.805344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.805601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.805616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.805708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.805732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 
00:38:25.126 [2024-12-10 12:41:31.805984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.805999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.806147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.806161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.806328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.806346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.806490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.806505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.806667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.806682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.806904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.806919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.807016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.807175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.807191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.807261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.807275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.807368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.807382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 
00:38:25.126 [2024-12-10 12:41:31.807525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.807540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.807641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.807656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.807802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.807818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.808049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.808064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.808234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.808249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.808342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.808356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.808495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.808509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.808698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.808713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.808935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.126 [2024-12-10 12:41:31.808950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.126 qpair failed and we were unable to recover it. 00:38:25.126 [2024-12-10 12:41:31.809135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.809150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 
00:38:25.127 [2024-12-10 12:41:31.809258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.809274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.809435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.809450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.809554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.809569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.809721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.809736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.809905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.809919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.810080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.810095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.810287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.810303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.810480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.810494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.810632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.810647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.810801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.810816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 
00:38:25.127 [2024-12-10 12:41:31.811051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.811066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.811260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.811276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.811345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.811359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.811506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.811522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.811656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.811671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.811934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.811950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.812193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.812208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.812351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.812366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.812500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.812516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.812595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.812609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 
00:38:25.127 [2024-12-10 12:41:31.812833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.812848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.813078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.813093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.813288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.813306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.813406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.813421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.813518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.813533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.813742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.813757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.813834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.813848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.814060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.814075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.814162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.814183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 00:38:25.127 [2024-12-10 12:41:31.814365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.814384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it. 
00:38:25.127 [2024-12-10 12:41:31.814456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.127 [2024-12-10 12:41:31.814471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.127 qpair failed and we were unable to recover it.
00:38:25.133 (the connect() failed / sock connection error / qpair failed triplet above repeats ~200 more times between 12:41:31.814 and 12:41:31.857, all targeting addr=10.0.0.2, port=4420; the failing tqpair is usually 0x61500033fe80, with isolated failures on tqpair=0x61500032ff80, 0x615000326480, and 0x615000350000)
00:38:25.133 [2024-12-10 12:41:31.857988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.858031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.858276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.858337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.858445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.858460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.858628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.858670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.858896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.858939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.859239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.859255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.859412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.859427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.859604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.859624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.859741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.859786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.859989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.860037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 
00:38:25.133 [2024-12-10 12:41:31.860315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.860368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.860634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.860681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.860973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.861029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.861287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.861311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.861581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.861635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.861886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.861931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.862221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.862276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.862526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.862549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.862735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.862758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.862934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.862957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 
00:38:25.133 [2024-12-10 12:41:31.863177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.863200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.863375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.863418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.863600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.863645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.863801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.863845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.863984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.864025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.864235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.864280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.864509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.864553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.864794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.864838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.864984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.865027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 00:38:25.133 [2024-12-10 12:41:31.865307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.865331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.133 qpair failed and we were unable to recover it. 
00:38:25.133 [2024-12-10 12:41:31.865444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.133 [2024-12-10 12:41:31.865467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.865570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.865592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.865687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.865710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.865818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.865836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.866092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.866107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.866291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.866306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.866476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.866520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.866802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.866845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.867119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.867133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.867272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.867287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 
00:38:25.134 [2024-12-10 12:41:31.867468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.867510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.867673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.867716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.867933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.867987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.868218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.868234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.868336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.868351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.868524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.868567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.868880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.868922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.869139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.869154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.869316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.869331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.869548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.869597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 
00:38:25.134 [2024-12-10 12:41:31.869892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.869936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.870206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.870222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.870299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.870314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.870420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.870435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.870584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.870599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.870848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.870892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.871122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.871178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.871342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.871385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.871610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.871653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.871884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.871927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 
00:38:25.134 [2024-12-10 12:41:31.872132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.872186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.872335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.872377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.872563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.872579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.872746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.872789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.873077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.873119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.873432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.873448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.873597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.873613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.873866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.873907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.874219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.874264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.874453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.874468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 
00:38:25.134 [2024-12-10 12:41:31.874680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.874723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.874897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.874940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.134 [2024-12-10 12:41:31.875193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.134 [2024-12-10 12:41:31.875227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.134 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.875383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.875399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.875627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.875642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.875722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.875737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.875910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.875925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.876031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.876047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.876130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.876146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.876316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.876332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 
00:38:25.135 [2024-12-10 12:41:31.876483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.876527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.876794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.876837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.876963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.877006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.877211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.877227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.877343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.877386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.877539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.877582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.877739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.877783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.877994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.878037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.878241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.878287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.878443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.878491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 
00:38:25.135 [2024-12-10 12:41:31.878737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.878780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.879044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.879087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.879322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.879367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.879578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.879593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.879763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.879808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.880021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.880075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.880292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.880337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.880535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.135 [2024-12-10 12:41:31.880551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.135 qpair failed and we were unable to recover it. 00:38:25.135 [2024-12-10 12:41:31.880642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.880658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.880894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.880908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 
00:38:25.136 [2024-12-10 12:41:31.881057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.881072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.881225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.881240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.881377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.881392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.881478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.881494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.881633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.881648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.881748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.881763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.881848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.881863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.882095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.882139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.882421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.882465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.882629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.882672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 
00:38:25.136 [2024-12-10 12:41:31.882992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.883035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.883264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.883309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.883524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.883567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.883809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.883853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.884000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.884044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.884280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.884326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.884538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.884554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.884667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.884683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.884770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.884786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.884923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.884938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 
00:38:25.136 [2024-12-10 12:41:31.885141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.885202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.885349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.885393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.885553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.885596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.885822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.885866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.886161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.886215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.886433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.886449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.886663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.886678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.886880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.886895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.887105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.887148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 00:38:25.136 [2024-12-10 12:41:31.887437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.887489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it. 
00:38:25.136 [2024-12-10 12:41:31.887743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.136 [2024-12-10 12:41:31.887786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.136 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair error above repeats back-to-back, only timestamps differing, for tqpair=0x61500033fe80 from 12:41:31.887 through 12:41:31.931 ...]
00:38:25.426 [2024-12-10 12:41:31.931983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.426 [2024-12-10 12:41:31.932073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.426 qpair failed and we were unable to recover it.
[... the same error then repeats for tqpair=0x615000326480 through 12:41:31.938 ...]
00:38:25.427 [2024-12-10 12:41:31.938376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.938422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.938656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.938679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.938914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.938939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.939040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.939059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.939150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.939172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.939347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.939363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.939576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.939618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.939766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.939810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.940088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.940132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.940364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.940407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 
00:38:25.427 [2024-12-10 12:41:31.940574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.940618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.940843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.940888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.941198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.941243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.941456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.941470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.941708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.941750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.941966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.942009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.942336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.942381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.942530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.942575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.942728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.942771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.942984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.943028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 
00:38:25.427 [2024-12-10 12:41:31.943302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.943348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.943497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.943511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.427 [2024-12-10 12:41:31.943623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.427 [2024-12-10 12:41:31.943638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.427 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.943741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.943756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.943975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.944017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.944231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.944276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.944499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.944543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.944837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.944879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.945195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.945241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.945500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.945550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 
00:38:25.428 [2024-12-10 12:41:31.945754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.945797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.946032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.946075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.946263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.946279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.946418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.946461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.946619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.946662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.946902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.946945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.947207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.947252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.947436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.947485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.947670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.947690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.947824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.947840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 
00:38:25.428 [2024-12-10 12:41:31.947985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.948001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.948157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.948178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.948343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.948387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.948620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.948664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.948891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.948935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.949229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.949273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.949520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.949563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.949867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.949910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.950139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.950191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.950456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.950500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 
00:38:25.428 [2024-12-10 12:41:31.950666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.950710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.951001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.951044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.951342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.951387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.951544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.951587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.951818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.951862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.952106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.952150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.952334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.952349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.952567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.952610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.952919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.952963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.953109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.953125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 
00:38:25.428 [2024-12-10 12:41:31.953357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.953373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.953525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.953541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.953698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.953713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.953813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.953828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.954061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.428 [2024-12-10 12:41:31.954104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.428 qpair failed and we were unable to recover it. 00:38:25.428 [2024-12-10 12:41:31.954360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.954405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.954509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.954524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.954722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.954765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.955089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.955132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.955379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.955431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 
00:38:25.429 [2024-12-10 12:41:31.955669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.955713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.956007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.956049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.956303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.956359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.956523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.956539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.956685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.956727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.957033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.957077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.957357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.957373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.957570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.957613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.957905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.957957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.958199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.958245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 
00:38:25.429 [2024-12-10 12:41:31.958473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.958518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.958826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.958869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.959126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.959180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.959359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.959404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.959578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.959622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.959857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.959900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.960192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.960237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.960452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.960496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.960720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.960735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.960915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.960959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 
00:38:25.429 [2024-12-10 12:41:31.961206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.961252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.961411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.961427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.961603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.961619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.961712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.961727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.961895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.961910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.962152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.962218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.962509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.962555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.962687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.962702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.962879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.962921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.963139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.963195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 
00:38:25.429 [2024-12-10 12:41:31.963398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.963442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.963645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.963662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.963884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.963905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.964051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.964066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.964241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.964256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.964487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.964503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.964642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.964657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.429 [2024-12-10 12:41:31.964977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.429 [2024-12-10 12:41:31.964993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.429 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.965239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.965285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.965457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.965508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 
00:38:25.430 [2024-12-10 12:41:31.965651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.965694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.966039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.966083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.966240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.966285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.966496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.966512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.966671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.966713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.966913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.966958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.967292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.967338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.967547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.967591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.969216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.969252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.969496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.969514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 
00:38:25.430 [2024-12-10 12:41:31.969681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.969698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.969960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.969977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.970277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.970298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.970476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.970493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.970582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.970598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.970760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.970776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.970885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.970901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.971155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.971180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.971351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.971367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 00:38:25.430 [2024-12-10 12:41:31.971545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.971560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it. 
00:38:25.430 [2024-12-10 12:41:31.971739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.430 [2024-12-10 12:41:31.971755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.430 qpair failed and we were unable to recover it.
[... the same three messages repeat, differing only in the microsecond timestamp: roughly 200 further connect() attempts between 12:41:31.971 and 12:41:32.006, each failing with errno = 111 against tqpair=0x61500033fe80, addr=10.0.0.2, port=4420, and each ending in "qpair failed and we were unable to recover it." ...]
00:38:25.435 [2024-12-10 12:41:32.006542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.006559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.006628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.006644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.006722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.006739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.006834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.006850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.006944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.006960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.007049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.007151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.007248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.007449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.007559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 
00:38:25.435 [2024-12-10 12:41:32.007678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.007768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.435 qpair failed and we were unable to recover it. 00:38:25.435 [2024-12-10 12:41:32.007859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-12-10 12:41:32.007875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.007952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.007968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.008038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.008056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.008134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.008151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.008294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.008342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.008560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.008609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.008747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.008795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.008899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.008917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 
00:38:25.436 [2024-12-10 12:41:32.009006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.009979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.009996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 
00:38:25.436 [2024-12-10 12:41:32.010147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.010164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.010310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.010326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.010390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.010407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.010498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.010515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.010579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.010596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.010734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.010751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.010909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.010930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.010997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.011014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.011152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.011174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.011248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.011265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 
00:38:25.436 [2024-12-10 12:41:32.011427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.011444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.011541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.011558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.011715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.011731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.011870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.011886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.012063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.012080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.012177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.012195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.012270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.012288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.012361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.012380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.012514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.012532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.012611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.012627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 
00:38:25.436 [2024-12-10 12:41:32.012871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.012888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.013073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.013090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.013243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.013262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.013422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.013440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.436 [2024-12-10 12:41:32.013599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.436 [2024-12-10 12:41:32.013617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.436 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.013714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.013731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.013821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.013839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.014067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.014085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.014232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.014249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.014349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.014366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 
00:38:25.437 [2024-12-10 12:41:32.014516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.014532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.014635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.014651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.014738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.014755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.014929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.014945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 
00:38:25.437 [2024-12-10 12:41:32.015695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.015960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.015976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.016083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.016100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.016266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.016283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.016433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.016449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.016550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.016568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.016723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.016739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.016834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.016850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.016935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.016951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 
00:38:25.437 [2024-12-10 12:41:32.017084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.017100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.017305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.017322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.017418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.017435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.017512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.017529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.017709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.017726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.017814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.017835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.017920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.017934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.018017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.018031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.018195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.018211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.018354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.018371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 
00:38:25.437 [2024-12-10 12:41:32.018454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.018470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.018565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.018581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.018762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.018778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.018848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.018864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.019008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.019025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.019192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.019208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.019301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.019318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.437 qpair failed and we were unable to recover it. 00:38:25.437 [2024-12-10 12:41:32.019460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.437 [2024-12-10 12:41:32.019476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.019560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.019576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.019721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.019737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 
00:38:25.438 [2024-12-10 12:41:32.019851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.019868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.020963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.020979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 
00:38:25.438 [2024-12-10 12:41:32.021068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.021152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.021340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.021434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.021544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.021627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.021784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.021979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.021996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.022144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.022161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.022278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.022295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 
00:38:25.438 [2024-12-10 12:41:32.022453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.022469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.022568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.022584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.022793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.022810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.022946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.022962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.023026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.023048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.023117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.023134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.023228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.023245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.023353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.023368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.023542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.023558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 00:38:25.438 [2024-12-10 12:41:32.023628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.438 [2024-12-10 12:41:32.023643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.438 qpair failed and we were unable to recover it. 
00:38:25.438 [2024-12-10 12:41:32.023733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.438 [2024-12-10 12:41:32.023748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.438 qpair failed and we were unable to recover it.
00:38:25.438-00:38:25.443 [2024-12-10 12:41:32.023886 through 12:41:32.057495] the same three-message sequence repeated roughly 200 more times: every connect() attempt returned errno = 111, mostly for tqpair=0x61500033fe80 and intermittently for tqpair=0x61500032ff80, 0x615000326480, and 0x615000350000, all against addr=10.0.0.2, port=4420; in each case the qpair failed and could not be recovered.
00:38:25.443 [2024-12-10 12:41:32.057559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.057575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.057733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.057750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.057897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.057913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.057994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.058020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.058138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.058163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.058438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.058462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.058564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.058582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.058721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.058738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.058969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.058985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.059121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 
00:38:25.444 [2024-12-10 12:41:32.059281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.059388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.059503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.059608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.059708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.059872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.059977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.059995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.060138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.060155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.060298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.060314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.060411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.060428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 
00:38:25.444 [2024-12-10 12:41:32.060580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.060596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.060685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.060702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.060866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.060882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.061055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.061071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.061179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.061198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.061269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.061291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.061448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.061465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.061704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.061720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.061813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.061830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.061966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.061982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 
00:38:25.444 [2024-12-10 12:41:32.062127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.062143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.062301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.062319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.062409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.062425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.062504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.062520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.062606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.062622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.062713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.062730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.062817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.062834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.063036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.063053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.063186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.063203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.063381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.063398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 
00:38:25.444 [2024-12-10 12:41:32.063561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.063578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.063666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.063682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.444 [2024-12-10 12:41:32.063846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.444 [2024-12-10 12:41:32.063862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.444 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.064000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.064155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.064258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.064437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.064550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.064670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.064756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 
00:38:25.445 [2024-12-10 12:41:32.064871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.064887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.065039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.065055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.065129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.065145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.065307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.065324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.065415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.065431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.065518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.065534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.065676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.065692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.065836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.065852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.066074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.066090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.066243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.066261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 
00:38:25.445 [2024-12-10 12:41:32.066413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.066429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.066513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.066529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.066619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.066636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.066785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.066802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.066904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.066920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.067084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.067268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.067381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.067477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.067566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 
00:38:25.445 [2024-12-10 12:41:32.067678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.067762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.067915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.067930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.068160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.068184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.068340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.068355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.068529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.068545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.068626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.068642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.068719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.068735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.068883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.068898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.068967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.068983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 
00:38:25.445 [2024-12-10 12:41:32.069137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.069153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.069318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.069344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.069453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.069480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.069575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.069605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.069749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.069766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.445 qpair failed and we were unable to recover it. 00:38:25.445 [2024-12-10 12:41:32.069968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.445 [2024-12-10 12:41:32.069984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.070053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.070068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.070144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.070159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.070318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.070334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.070416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.070432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 
00:38:25.446 [2024-12-10 12:41:32.070580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.070596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.070753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.070770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.070921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.070941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.071160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.071188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.071353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.071368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.071450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.071466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.071538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.071554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.071650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.071665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.071822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.071838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.072047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.072062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 
00:38:25.446 [2024-12-10 12:41:32.072173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.072190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.072263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.072279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.072433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.072448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.072555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.072571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.072732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.072748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.072847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.072863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 
00:38:25.446 [2024-12-10 12:41:32.073428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.073929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.073993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.074088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.074245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.074333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.074555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 
00:38:25.446 [2024-12-10 12:41:32.074724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.074821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.074931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.074947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.075038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.075056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.075144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.075159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.075343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.075360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.075577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.446 [2024-12-10 12:41:32.075593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.446 qpair failed and we were unable to recover it. 00:38:25.446 [2024-12-10 12:41:32.075749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.447 [2024-12-10 12:41:32.075764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.447 qpair failed and we were unable to recover it. 00:38:25.447 [2024-12-10 12:41:32.076014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.447 [2024-12-10 12:41:32.076030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.447 qpair failed and we were unable to recover it. 00:38:25.447 [2024-12-10 12:41:32.076113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.447 [2024-12-10 12:41:32.076129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.447 qpair failed and we were unable to recover it. 
00:38:25.447 [2024-12-10 12:41:32.076234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.447 [2024-12-10 12:41:32.076250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.447 qpair failed and we were unable to recover it.
00:38:25.447 [2024-12-10 12:41:32.076426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.447 [2024-12-10 12:41:32.076442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.447 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, with only the timestamps advancing from 12:41:32.076 through 12:41:32.118 ...]
00:38:25.452 [2024-12-10 12:41:32.118345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.452 [2024-12-10 12:41:32.118389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.452 qpair failed and we were unable to recover it.
00:38:25.452 [2024-12-10 12:41:32.118533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.118577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.118729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.118772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.119052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.119096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.119344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.119389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.119549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.119593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.119819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.119862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.120080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.120095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.120196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.120229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.120293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.120326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.120473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.120488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 
00:38:25.452 [2024-12-10 12:41:32.120637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.120652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.120808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.120851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.121047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.121090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.121241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.121286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.121483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.121527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.121801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.121816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.121946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.121961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.122122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.122175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.122383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.122427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.122637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.122679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 
00:38:25.452 [2024-12-10 12:41:32.122808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.122823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.122916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.122931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.123001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.123052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.123270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.123319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.123588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.123643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.123793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.123808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.123901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.123916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.124124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.124139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.124384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.124400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.124551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.124567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 
00:38:25.452 [2024-12-10 12:41:32.124708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.124723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.124898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.124913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.452 [2024-12-10 12:41:32.124996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.452 [2024-12-10 12:41:32.125011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.452 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.125177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.125192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.125390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.125405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.125568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.125582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.125786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.125801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.125894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.125909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.126040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.126056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.126264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.126323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 
00:38:25.453 [2024-12-10 12:41:32.126471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.126515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.126723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.126766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.126901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.126916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.127098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.127115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.127250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.127266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.127443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.127457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.127615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.127660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.127874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.127917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.128137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.128209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.128371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.128414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 
00:38:25.453 [2024-12-10 12:41:32.128701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.128789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.129069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.129117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.129412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.129461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.129626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.129644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.129747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.129790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.129940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.129984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.130195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.130241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.130448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.130491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.130707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.130750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.130893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.130908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 
00:38:25.453 [2024-12-10 12:41:32.131101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.131143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.131438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.131482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.131690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.131734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.131946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.131995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.132216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.132263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.132401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.132443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.132639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.132682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.132942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.132984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.133223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.133268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.133534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.133576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 
00:38:25.453 [2024-12-10 12:41:32.133785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.133829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.134112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.134155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.134298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.134343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.134548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.134591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.134734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.453 [2024-12-10 12:41:32.134749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.453 qpair failed and we were unable to recover it. 00:38:25.453 [2024-12-10 12:41:32.134883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.134903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.135128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.135143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.135391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.135408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.135608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.135625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.135760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.135775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 
00:38:25.454 [2024-12-10 12:41:32.135923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.135939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.136092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.136134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.136428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.136472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.136707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.136721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.136890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.136934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.137063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.137106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.137372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.137417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.137608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.137650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.137938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.137980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.138113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.138156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 
00:38:25.454 [2024-12-10 12:41:32.138388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.138433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.138697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.138740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.138891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.138935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.139068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.139111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.139372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.139418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.139630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.139675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.139843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.139887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.140106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.140122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.140300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.140345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.140566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.140611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 
00:38:25.454 [2024-12-10 12:41:32.140810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.140854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.141005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.141048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.141314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.141358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.141505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.141555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.141758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.141802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.142040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.142055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.142147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.142162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.142329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.142371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.142520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.142563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.142700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.142743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 
00:38:25.454 [2024-12-10 12:41:32.143010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.143053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.143262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.143308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.143450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.143502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.143708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.143735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.143830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.143845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.143933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.143948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.144034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.144072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.454 [2024-12-10 12:41:32.144254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.454 [2024-12-10 12:41:32.144299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.454 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.144516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.144559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.144784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.144828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 
00:38:25.455 [2024-12-10 12:41:32.145025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.145069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.145219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.145235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.145346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.145361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.145512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.145556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.145707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.145749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.145898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.145941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.146079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.146094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.146175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.146191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.146341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.146357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.146501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.146516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 
00:38:25.455 [2024-12-10 12:41:32.146708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.146798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.147041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.147092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.147253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.147300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.147507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.147552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.147776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.147821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.148025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.148069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.148209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.148254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.148452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.148496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.148777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.148824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.148964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.148993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 
00:38:25.455 [2024-12-10 12:41:32.149232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.149256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.149359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.149380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.149587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.149604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.149767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.149784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.149890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.149934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.150072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.150116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.150326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.150370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.150571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.150614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.150746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.150789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 00:38:25.455 [2024-12-10 12:41:32.150915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.455 [2024-12-10 12:41:32.150958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.455 qpair failed and we were unable to recover it. 
00:38:25.460 [2024-12-10 12:41:32.187643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.187693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.187900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.187943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.188100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.188142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.188357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.188373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.188574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.188588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.188814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.188829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.189055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.189069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.189222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.189238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.189393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.189436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.189650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.189693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 
00:38:25.460 [2024-12-10 12:41:32.189907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.189955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.190086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.190100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.190302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.190348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.190552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.190608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.190847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.190890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.191143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.191195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.191462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.191506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.191713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.191756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.192008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.192049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.192153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.192182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 
00:38:25.460 [2024-12-10 12:41:32.192408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.192451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.192658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.192701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.192844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.192893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.193026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.193041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.193218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.193263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.193554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.193597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.193812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.193855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.194018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.194061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.194278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.194323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.460 qpair failed and we were unable to recover it. 00:38:25.460 [2024-12-10 12:41:32.194518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.460 [2024-12-10 12:41:32.194560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 
00:38:25.461 [2024-12-10 12:41:32.194836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.194851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.194995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.195010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.195194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.195238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.195519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.195563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.195766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.195790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.195879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.195893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.196031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.196045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.196137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.196177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.196403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.196447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.196710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.196752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 
00:38:25.461 [2024-12-10 12:41:32.196842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.196859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.196946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.196962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.197122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.197189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.197477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.197521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.197661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.197704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.197932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.197975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.198194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.198241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.198410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.198424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.198624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.198639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.198790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.198836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 
00:38:25.461 [2024-12-10 12:41:32.199056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.199098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.199309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.199354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.199653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.199696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.199900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.199915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.200070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.200085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.200250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.200293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.200556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.200600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.200917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.200960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.201125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.201181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.201470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.201513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 
00:38:25.461 [2024-12-10 12:41:32.201817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.201859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.202068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.202112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.202345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.202391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.202542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.202585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.202802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.202817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.202895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.202910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.203051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.203065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.203305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.203351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.203504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.203548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.203755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.203798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 
00:38:25.461 [2024-12-10 12:41:32.204030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.204045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.204216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.461 [2024-12-10 12:41:32.204261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.461 qpair failed and we were unable to recover it. 00:38:25.461 [2024-12-10 12:41:32.204497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.204539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.204773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.204816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.205080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.205123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.205239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.205254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.205478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.205522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.205754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.205818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.206018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.206062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.206345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.206390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 
00:38:25.462 [2024-12-10 12:41:32.206675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.206725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.206867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.206911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.207143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.207198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.207355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.207400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.207684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.207727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.207988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.208003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.208238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.208253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.208404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.208418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.208516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.208531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.208690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.208732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 
00:38:25.462 [2024-12-10 12:41:32.208995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.209038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.209348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.209392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.209620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.209664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.209813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.209856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.210120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.210134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.210271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.210286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.210434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.210448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.210621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.210636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.210726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.210741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.210901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.210944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 
00:38:25.462 [2024-12-10 12:41:32.211205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.211250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.211490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.211532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.211744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.211788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.211916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.211959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.212148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.212162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.212380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.212395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.212543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.212558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.212656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.212671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.212823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.212848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.213017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.213060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 
00:38:25.462 [2024-12-10 12:41:32.213327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.213372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.213641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.213684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.213978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.214022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.214197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.462 [2024-12-10 12:41:32.214213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.462 qpair failed and we were unable to recover it. 00:38:25.462 [2024-12-10 12:41:32.214282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.214297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.214523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.214538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.214619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.214633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.214741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.214756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.214929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.214944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.215048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.215090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 
00:38:25.463 [2024-12-10 12:41:32.215246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.215297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.215518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.215562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.215855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.215899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.216055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.216071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.216143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.216158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.216363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.216379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.216460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.216474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.216621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.216635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.216720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.216735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.216909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.216924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 
00:38:25.463 [2024-12-10 12:41:32.217074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.217089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.217155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.217178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.217318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.217334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.217543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.217558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.217728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.217743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.217910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.217952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.218234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.218279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.218559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.218575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.218807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.218823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.219069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.219085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 
00:38:25.463 [2024-12-10 12:41:32.219237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.219253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.219390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.219410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.219557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.219572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.219707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.219722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.219965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.220006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.220152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.220205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.220348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.220390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.220615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.220630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.220771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.220785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 00:38:25.463 [2024-12-10 12:41:32.220863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.463 [2024-12-10 12:41:32.220878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.463 qpair failed and we were unable to recover it. 
00:38:25.463 [2024-12-10 12:41:32.221466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.743 [2024-12-10 12:41:32.221513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:25.743 qpair failed and we were unable to recover it.
00:38:25.743 [2024-12-10 12:41:32.221742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.743 [2024-12-10 12:41:32.221790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.743 qpair failed and we were unable to recover it.
[... identical connect()/qpair failures against tqpair=0x61500033fe80 resume at 12:41:32.221976 and continue through 12:41:32.225997; elided ...]
00:38:25.743 [2024-12-10 12:41:32.222724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.222741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.222885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.222900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.223047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.223062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.223212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.223228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.223431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.223446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.223646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.223661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.223754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.223769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.223931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.223946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.224115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.224130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.224291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.224306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 
00:38:25.743 [2024-12-10 12:41:32.224462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.224477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.224650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.224665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.224754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.224769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.224971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.224986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.225212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.225229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.225434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.225449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.225528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.225543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.225690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.225705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.225787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.225802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.743 [2024-12-10 12:41:32.225983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.225997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 
00:38:25.743 [2024-12-10 12:41:32.226215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.743 [2024-12-10 12:41:32.226230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.743 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.226395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.226410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.226504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.226518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.226597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.226612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.226744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.226759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.226848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.226863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.227005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.227020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.227213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.227302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.227630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.227684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.227834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.227880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 
00:38:25.744 [2024-12-10 12:41:32.228023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.228046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.228214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.228237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.228473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.228517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.228782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.228825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.228962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.229004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.229182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.229226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.229454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.229498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.229654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.229696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.229895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.229939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.230132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.230155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 
00:38:25.744 [2024-12-10 12:41:32.230340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.230368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.230480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.230502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.230737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.230755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.230969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.230984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.231135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.231318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.231333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.231403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.231417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.231567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.231582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.231766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.231808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.231952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.231995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 
00:38:25.744 [2024-12-10 12:41:32.232285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.232331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.232569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.232612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.232826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.232869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.233002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.233044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.233210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.233254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.233459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.233503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.233714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.233758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.233959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.234001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.234141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.234155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 00:38:25.744 [2024-12-10 12:41:32.234419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.744 [2024-12-10 12:41:32.234463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.744 qpair failed and we were unable to recover it. 
00:38:25.745 [2024-12-10 12:41:32.234587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.234629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.234860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.234904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.235188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.235244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.235389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.235432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.235628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.235670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.235933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.235976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.236137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.236193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.236476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.236532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.236827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.236874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.237036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.237089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 
00:38:25.745 [2024-12-10 12:41:32.237190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.237214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.237407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.237451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.237762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.237806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.237962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.238006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.238241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.238287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.238502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.238547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.238760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.238804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.239100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.239145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.239349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.239372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.239520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.239537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 
00:38:25.745 [2024-12-10 12:41:32.239683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.239702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.239851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.239865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.240077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.240093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.240252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.240295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.240457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.240500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.240645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.240688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.240945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.240960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.241109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.241152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.241395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.241440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.241582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.241625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 
00:38:25.745 [2024-12-10 12:41:32.241861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.241904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.242115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.242158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.242331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.242346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.745 [2024-12-10 12:41:32.242436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.745 [2024-12-10 12:41:32.242472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.745 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.242680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.242724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.242880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.242923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.243197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.243212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.243365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.243379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.243526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.243568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.243760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.243802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 
00:38:25.746 [2024-12-10 12:41:32.244093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.244140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.244365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.244410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.244704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.244750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.244990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.245035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.245193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.245217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.245307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.245328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.245588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.245630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.245874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.245964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.246266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.246292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.246553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.246598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 
00:38:25.746 [2024-12-10 12:41:32.246797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.246840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.247069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.247112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.247412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.247457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.247679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.247724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.247989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.248033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.248197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.248246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.248411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.248428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.248596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.248639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.248882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.248924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.249062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.249105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 
00:38:25.746 [2024-12-10 12:41:32.249361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.249379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.249517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.249532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.249674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.249716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.249999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.250041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.250343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.250387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.250549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.250592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.250718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.250761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.251020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.251062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.251370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.251415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.251644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.251687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 
00:38:25.746 [2024-12-10 12:41:32.251849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.746 [2024-12-10 12:41:32.251893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.746 qpair failed and we were unable to recover it. 00:38:25.746 [2024-12-10 12:41:32.252113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.252155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.252323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.252367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.252558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.252601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.252809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.252860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.253009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.253051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.253207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.253252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.253458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.253502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.253697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.253739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.254033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.254075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 
00:38:25.747 [2024-12-10 12:41:32.254331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.254347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.254494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.254537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.254754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.254796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.255062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.255105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.255405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.255450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.255595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.255638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.255832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.255875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.256180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.256225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.256459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.256501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 00:38:25.747 [2024-12-10 12:41:32.256714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.747 [2024-12-10 12:41:32.256757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.747 qpair failed and we were unable to recover it. 
00:38:25.747 [2024-12-10 12:41:32.256986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.747 [2024-12-10 12:41:32.257029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.747 qpair failed and we were unable to recover it.
00:38:25.747-00:38:25.753 [... the same three-record sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats continuously from 12:41:32.257165 through 12:41:32.304668, always for tqpair=0x61500033fe80, addr=10.0.0.2, port=4420; identical entries elided ...]
00:38:25.753 [2024-12-10 12:41:32.304824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.304867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.305107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.305122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.305213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.305229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.305306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.305321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.305482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.305496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.305636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.305651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.305858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.305883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.305983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.305998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.306072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.306086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.306243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.306288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 
00:38:25.753 [2024-12-10 12:41:32.306522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.306564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.306780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.306824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.307044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.307087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.307386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.307401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.307534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.307549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.307758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.307800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.308023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.308066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.308301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.308317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.308412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.308428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.308573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.308616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 
00:38:25.753 [2024-12-10 12:41:32.308844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.308887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.309101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.309144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.309375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.309419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.309634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.309677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.309910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.309952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.310161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.310216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.310458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.310472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.753 qpair failed and we were unable to recover it. 00:38:25.753 [2024-12-10 12:41:32.310567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.753 [2024-12-10 12:41:32.310585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.310812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.310827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.310987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.311005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 
00:38:25.754 [2024-12-10 12:41:32.311096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.311111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.311208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.311223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.311354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.311368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.311514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.311557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.311752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.311796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.312011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.312054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.312211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.312256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.312386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.312429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.312575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.312618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.312902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.312945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 
00:38:25.754 [2024-12-10 12:41:32.313110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.313153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.313453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.313496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.313622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.313665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.313949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.313992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.314237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.314252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.314397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.314412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.314577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.314592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.314704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.314747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.314972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.315016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.315304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.315366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 
00:38:25.754 [2024-12-10 12:41:32.315510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.315553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.315821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.315864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.316093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.316136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.316285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.316300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.316540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.316584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.316743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.316786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.317067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.317110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.317327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.317343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.317503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.317546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.317776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.317819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 
00:38:25.754 [2024-12-10 12:41:32.317946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.317989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.318195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.318211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.318296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.318337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.318625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.318668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.318792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.318835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.318970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.319013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.754 [2024-12-10 12:41:32.319237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.754 [2024-12-10 12:41:32.319252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.754 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.319361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.319377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.319606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.319649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.319846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.319890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 
00:38:25.755 [2024-12-10 12:41:32.320120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.320164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.320368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.320412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.320623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.320665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.320959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.321002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.321203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.321247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.321376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.321390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.321471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.321486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.321720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.321763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.322027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.322070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.322187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.322219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 
00:38:25.755 [2024-12-10 12:41:32.322395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.322439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.322586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.322631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.322850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.322893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.323111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.323152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.323329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.323381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.323559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.323574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.323735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.323778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.323971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.324014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.324244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.324289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.324437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.324480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 
00:38:25.755 [2024-12-10 12:41:32.324683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.324726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.324974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.325019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.325236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.325280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.325535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.325550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.325714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.325758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.326029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.326072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.326357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.326373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.326465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.326480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.326694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.326737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.326879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.326923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 
00:38:25.755 [2024-12-10 12:41:32.327149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.327200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.327348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.327363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.327607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.327649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.327955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.327998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.328222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.328248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.755 qpair failed and we were unable to recover it. 00:38:25.755 [2024-12-10 12:41:32.328405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.755 [2024-12-10 12:41:32.328420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.328630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.328673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.328871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.328920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.329152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.329207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.329276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.329291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 
00:38:25.756 [2024-12-10 12:41:32.329510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.329553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.329773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.329817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.330042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.330085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.330366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.330427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.330655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.330670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.330816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.330832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.330978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.331021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.331251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.331295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.331587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.331630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.331796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.331840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 
00:38:25.756 [2024-12-10 12:41:32.332000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.332015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.332185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.332217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.332432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.332475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.332714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.332758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.332962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.333004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.333245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.333290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.333578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.333592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.333745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.333760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.333860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.333875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 00:38:25.756 [2024-12-10 12:41:32.334018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.756 [2024-12-10 12:41:32.334033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.756 qpair failed and we were unable to recover it. 
00:38:25.756 [2024-12-10 12:41:32.334190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.756 [2024-12-10 12:41:32.334233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.756 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 repeats continuously from 12:41:32.334446 through 12:41:32.343903 and is elided here ...]
00:38:25.757 [2024-12-10 12:41:32.343993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor
00:38:25.757 [2024-12-10 12:41:32.344488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.757 [2024-12-10 12:41:32.344536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.757 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x615000350000 with addr=10.0.0.2, port=4420 repeats from 12:41:32.344664 through 12:41:32.345381 and is elided here ...]
[... connect() failures for tqpair=0x615000350000 continue from 12:41:32.345530 through 12:41:32.346303 and are elided here; the retries then return to the earlier qpair ...]
00:38:25.758 [2024-12-10 12:41:32.346476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.758 [2024-12-10 12:41:32.346524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.758 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 repeats continuously from 12:41:32.346787 through 12:41:32.383604 and is elided here ...]
00:38:25.762 [2024-12-10 12:41:32.383824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.383868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.384102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.384144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.384427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.384471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.384580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.384595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.384668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.384682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.384758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.384809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.385042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.385086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.385303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.385352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.385490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.385505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.385679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.385721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 
00:38:25.762 [2024-12-10 12:41:32.386016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.386060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.386209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.386253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.386415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.386458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.386732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.386775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.387037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.387079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.387285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.387330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.387551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.387595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.387826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.387869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.388082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.388125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.388343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.388387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 
00:38:25.762 [2024-12-10 12:41:32.388600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.388642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.388844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.388861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.389071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.389114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.389404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.389449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.389667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.389683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.389781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.389796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.390068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.390111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.390415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.390432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.390584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.390599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 00:38:25.762 [2024-12-10 12:41:32.390689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.762 [2024-12-10 12:41:32.390704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.762 qpair failed and we were unable to recover it. 
00:38:25.763 [2024-12-10 12:41:32.390886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.390928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.391215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.391260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.391549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.391592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.391737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.391781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.391912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.391954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.392233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.392249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.392452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.392467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.392549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.392564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.392730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.392745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.392881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.392905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 
00:38:25.763 [2024-12-10 12:41:32.393061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.393104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.393257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.393301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.393568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.393612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.393822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.393864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.394015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.394059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.394347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.394392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.394553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.394596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.394760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.394775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.394949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.394964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.395036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.395051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 
00:38:25.763 [2024-12-10 12:41:32.395121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.395136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.395225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.395241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.395396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.395410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.395488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.395503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.395596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.395612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.395760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.395802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.396061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.396104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.396262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.396303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.396397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.396412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.396651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.396696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 
00:38:25.763 [2024-12-10 12:41:32.396982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.397026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.397226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.397277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.397545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.397587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.397831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.397846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.398074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.398089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.398326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.398341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.398445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.398488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.398698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.398740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.398951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.763 [2024-12-10 12:41:32.398994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.763 qpair failed and we were unable to recover it. 00:38:25.763 [2024-12-10 12:41:32.399255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.399300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 
00:38:25.764 [2024-12-10 12:41:32.399460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.399506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.399594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.399608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.399820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.399862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.400021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.400064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.400205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.400249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.400459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.400474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.400639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.400682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.400884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.400926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.401139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.401193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.401457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.401472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 
00:38:25.764 [2024-12-10 12:41:32.401644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.401659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.401727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.401772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.401984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.402028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.402263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.402308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.402452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.402466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.402619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.402634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.402770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.402784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.402933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.402976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.403275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.403365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.403605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.403632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 
00:38:25.764 [2024-12-10 12:41:32.403803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.403826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.404070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.404115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.404280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.404327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.404544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.404589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.404850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.404895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.405114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.405159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.405422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.405467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.405777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.405822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.406078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.406139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.406370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.406393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 
00:38:25.764 [2024-12-10 12:41:32.406510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.406532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.406707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.406736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.406896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.406919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.407052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.407097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.407245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.407289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.407505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.407549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.407703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.407725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.407824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.764 [2024-12-10 12:41:32.407847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.764 qpair failed and we were unable to recover it. 00:38:25.764 [2024-12-10 12:41:32.407945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.407968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.408245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.408294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 
00:38:25.765 [2024-12-10 12:41:32.408516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.408559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.408748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.408763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.408901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.408916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.409091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.409107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.409321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.409366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.409524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.409568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.409780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.409823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.410082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.410125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.410296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.410343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.410557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.410601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 
00:38:25.765 [2024-12-10 12:41:32.410872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.410916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.411075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.411119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.411408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.411453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.411653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.411676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.411833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.411879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.412088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.412130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.412443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.412532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.412758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.412804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.413179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.413250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 00:38:25.765 [2024-12-10 12:41:32.413497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.765 [2024-12-10 12:41:32.413513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.765 qpair failed and we were unable to recover it. 
00:38:25.765 [2024-12-10 12:41:32.413695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.765 [2024-12-10 12:41:32.413710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.765 qpair failed and we were unable to recover it.
00:38:25.765-00:38:25.771 [12:41:32.413-12:41:32.465] (the same three-line failure - posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." - repeats roughly 200 more times against addr=10.0.0.2, port=4420, cycling through tqpair handles 0x61500033fe80, 0x61500032ff80, 0x615000326480, and 0x615000350000)
00:38:25.771 [2024-12-10 12:41:32.465781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.465827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.466051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.466095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.466347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.466392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.466558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.466602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.466842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.466865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.467013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.467030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.467111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.467126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.467347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.467363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.467543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.467557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.467654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.467697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 
00:38:25.771 [2024-12-10 12:41:32.467928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.467973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.468207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.468254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.468437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.468452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.468603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.468645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.468835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.468878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.469103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.469146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.469354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.469397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.469620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.771 [2024-12-10 12:41:32.469663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.771 qpair failed and we were unable to recover it. 00:38:25.771 [2024-12-10 12:41:32.469863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.469907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.470054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.470097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 
00:38:25.772 [2024-12-10 12:41:32.470248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.470294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.470508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.470551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.470830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.470873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.471107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.471149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.471486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.471531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.471674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.471717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.471872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.471915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.472060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.472104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.472272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.472325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.472569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.472612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 
00:38:25.772 [2024-12-10 12:41:32.472794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.472809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.472955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.472969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.473148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.473204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.473403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.473445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.473658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.473703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.473822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.473837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.474068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.474110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.474400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.474450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.474607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.474663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.474871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.474913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 
00:38:25.772 [2024-12-10 12:41:32.475125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.475194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.475417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.475460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.475728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.475743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.475845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.475861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.476019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.476035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.476220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.476236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.476390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.476433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.476585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.476629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.476844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.476888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.477179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.477224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 
00:38:25.772 [2024-12-10 12:41:32.477492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.477535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.477754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.477797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.477985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.478000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.478207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.478251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.478478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.478522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.478726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.478741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.772 [2024-12-10 12:41:32.478883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.772 [2024-12-10 12:41:32.478929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.772 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.479140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.479194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.479347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.479390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.479654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.479697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 
00:38:25.773 [2024-12-10 12:41:32.479901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.479915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.480046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.480061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.480242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.480258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.480417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.480459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.480597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.480640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.480872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.480921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.481015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.481030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.481116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.481130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.481289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.481308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.481547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.481590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 
00:38:25.773 [2024-12-10 12:41:32.481786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.481829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.481989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.482032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.482364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.482409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.482625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.482668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.482874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.482917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.483229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.483274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.483481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.483496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.483591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.483605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.483856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.483900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.484129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.484187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 
00:38:25.773 [2024-12-10 12:41:32.484469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.484512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.484608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.484622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.484766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.484811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.484964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.485009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.485293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.485338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.485512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.485526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.485685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.485728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.485873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.485915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.486122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.486165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.486316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.486358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 
00:38:25.773 [2024-12-10 12:41:32.486555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.486598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.486865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.486908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.487108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.487151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.487449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.487492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.487757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.487800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.488032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.773 [2024-12-10 12:41:32.488047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.773 qpair failed and we were unable to recover it. 00:38:25.773 [2024-12-10 12:41:32.488202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.488234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.488401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.488445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.488578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.488620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.488942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.488985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 
00:38:25.774 [2024-12-10 12:41:32.489204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.489251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.489490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.489532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.489702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.489721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.489877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.489920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.490128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.490180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.490326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.490369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.490573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.490617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.490894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.490937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.491139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.491198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.491411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.491454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 
00:38:25.774 [2024-12-10 12:41:32.491717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.491760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.491982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.491997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.492170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.492185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.492286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.492300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.492401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.492417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.492574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.492615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.492823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.492866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.493125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.493194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.493357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.493400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.493618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.493661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 
00:38:25.774 [2024-12-10 12:41:32.493868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.493882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.494070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.494112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.494371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.494417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.494654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.494697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.494891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.494933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.495183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.495227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.495494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.495537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.495813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.495855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.496059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.496102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 00:38:25.774 [2024-12-10 12:41:32.496339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.496383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it. 
00:38:25.774 [2024-12-10 12:41:32.496531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.774 [2024-12-10 12:41:32.496546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.774 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for roughly 200 consecutive reconnect attempts between 12:41:32.496 and 12:41:32.545; nearly every attempt reports tqpair=0x61500033fe80, except three attempts at 12:41:32.543-12:41:32.544 that report tqpair=0x61500032ff80, 0x615000350000, and 0x615000326480 before the log returns to 0x61500033fe80 ...]
00:38:25.780 [2024-12-10 12:41:32.544910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.544954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it.
00:38:25.780 [2024-12-10 12:41:32.545207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.545222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.545456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.545498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.545713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.545757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.545903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.545918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.546146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.546160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.546249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.546263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.546416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.546431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.546585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.546599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.546735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.546752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.546916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.546959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 
00:38:25.780 [2024-12-10 12:41:32.547213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.547257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.547563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.547606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.547819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.547861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.548009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.548025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.548236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.548252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.548401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.548415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.548580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.548595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:25.780 [2024-12-10 12:41:32.548748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.780 [2024-12-10 12:41:32.548763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.780 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.548910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.548925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.549009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 
00:38:26.060 [2024-12-10 12:41:32.549177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.549269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.549382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.549538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.549753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.549856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.549947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.549962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.550048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.550063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.550262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.550278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.550360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.550375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 
00:38:26.060 [2024-12-10 12:41:32.550601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.550616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.550769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.550783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.550872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.550886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.550951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.550965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.551102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.551117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.551226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.551256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.551460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.551490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.551614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.551644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.551809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.060 [2024-12-10 12:41:32.551826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.060 qpair failed and we were unable to recover it. 00:38:26.060 [2024-12-10 12:41:32.551963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.551978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 
00:38:26.061 [2024-12-10 12:41:32.552063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.552150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.552267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.552485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.552579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.552689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.552866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.552964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.552979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.553058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.553074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.553273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.553289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 
00:38:26.061 [2024-12-10 12:41:32.553425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.553439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.553578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.553593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.553668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.553683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.553786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.553801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.553868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.553883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.554019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.554034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.554120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.554134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.554270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.554286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.554367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.554409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.554669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.554713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 
00:38:26.061 [2024-12-10 12:41:32.554837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.554880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.555112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.555127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.555287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.555302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.555445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.555459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.555605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.555620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.555703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.555748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.555978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.556022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.556156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.556211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.556421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.556464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.556618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.556660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 
00:38:26.061 [2024-12-10 12:41:32.556857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.556901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.557186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.557202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.557356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.557371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.557525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.557540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.557693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.557708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.557822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.557848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.558101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.558153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.061 qpair failed and we were unable to recover it. 00:38:26.061 [2024-12-10 12:41:32.558328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.061 [2024-12-10 12:41:32.558382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.558658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.558703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.558842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.558885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 
00:38:26.062 [2024-12-10 12:41:32.559108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.559151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.559439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.559484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.559619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.559661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.559925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.559967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.560177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.560193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.560268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.560307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.560503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.560544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.560834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.560877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.561137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.561155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.561313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.561327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 
00:38:26.062 [2024-12-10 12:41:32.561562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.561607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.561814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.561870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.562072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.562087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.562239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.562255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.562439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.562481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.562637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.562679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.562873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.562914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.563155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.563185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.563281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.563296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.563513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.563551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 
00:38:26.062 [2024-12-10 12:41:32.563761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.563776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.563936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.563950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.564191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.564237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.564432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.564474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.564685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.564728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.565041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.565084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.565256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.565300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.565494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.565537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.565722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.565737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.565948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.565991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 
00:38:26.062 [2024-12-10 12:41:32.566225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.566270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.566523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.566567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.566774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.566817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.566969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.567012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.567202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.567245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.567450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.062 [2024-12-10 12:41:32.567499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.062 qpair failed and we were unable to recover it. 00:38:26.062 [2024-12-10 12:41:32.567704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.567749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.567963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.568008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.568195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.568220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.568396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.568440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 
00:38:26.063 [2024-12-10 12:41:32.568737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.568780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.568999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.569044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.569252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.569298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.569449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.569493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.569688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.569732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.569961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.570006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.570205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.570250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.570415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.570458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.570666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.570711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.570923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.570967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 
00:38:26.063 [2024-12-10 12:41:32.571231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.571275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.571488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.571532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.571824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.571868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.572104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.572121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.572370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.572386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.572488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.572503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.572721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.572736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.572800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.572814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.573000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.573043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 00:38:26.063 [2024-12-10 12:41:32.573270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.063 [2024-12-10 12:41:32.573315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.063 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 12:41:32.573461 onward, differing only in microsecond timestamps; the run ends: ...]
00:38:26.068 [2024-12-10 12:41:32.617162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.068 [2024-12-10 12:41:32.617219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.068 qpair failed and we were unable to recover it.
00:38:26.068 [2024-12-10 12:41:32.617410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.068 [2024-12-10 12:41:32.617453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.068 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.617596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.617638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.617840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.617854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.618088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.618136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.618369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.618413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.618620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.618663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.618867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.618911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.619129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.619184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.619279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.619293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.619505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.619547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 
00:38:26.069 [2024-12-10 12:41:32.619749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.619792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.619996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.620039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.620254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.620298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.620449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.620493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.620731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.620775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.620956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.620973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.621125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.621139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.621213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.621228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.621293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.621308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.621463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.621478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 
00:38:26.069 [2024-12-10 12:41:32.621648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.621662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.621902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.621945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.622188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.622232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.622440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.622483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.622794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.622838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.623141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.623228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.623449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.623493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.623727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.623769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.623972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.623987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.624182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.624228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 
00:38:26.069 [2024-12-10 12:41:32.624493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.624536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.624737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.624780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.624995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.625038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.625252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.625297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.069 [2024-12-10 12:41:32.625504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.069 [2024-12-10 12:41:32.625547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.069 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.625711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.625754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.625975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.625989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.626178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.626223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.626435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.626478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.626681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.626723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 
00:38:26.070 [2024-12-10 12:41:32.627003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.627046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.627266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.627282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.627516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.627560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.627760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.627803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.628012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.628054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.628171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.628187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.628277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.628292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.628431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.628445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.628666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.628708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.628978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.629021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 
00:38:26.070 [2024-12-10 12:41:32.629153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.629173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.629333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.629348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.629559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.629575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.629713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.629728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.629811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.629837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.630050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.630101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.630322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.630368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.630578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.630620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.630756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.630800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.631133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.631193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 
00:38:26.070 [2024-12-10 12:41:32.631418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.631432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.631583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.631598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.631815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.631829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.631972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.632014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.632273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.632320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.632583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.632626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.632868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.632913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.633044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.633087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.633295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.633310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.633552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.633596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 
00:38:26.070 [2024-12-10 12:41:32.633756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.633799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.634029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.634071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.070 [2024-12-10 12:41:32.634217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.070 [2024-12-10 12:41:32.634262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.070 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.634484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.634527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.634719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.634762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.634912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.634956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.635259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.635302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.635510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.635554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.635783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.635825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.636039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.636082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 
00:38:26.071 [2024-12-10 12:41:32.636305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.636350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.636581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.636625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.636979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.637067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.637388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.637437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.637626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.637674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.637849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.637866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.638026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.638069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.638216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.638261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.638414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.638456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.638740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.638783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 
00:38:26.071 [2024-12-10 12:41:32.638979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.639022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.639304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.639319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.639456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.639475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.639644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.639687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.639840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.639882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.640098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.640161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.640378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.640393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.640617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.640632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.640714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.640766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.640966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.641009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 
00:38:26.071 [2024-12-10 12:41:32.641212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.641258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.641460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.641504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.641815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.641858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.642052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.642095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.642334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.642379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.642541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.642584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.642867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.642911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.643202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.643246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.643471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.643514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.643661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.643704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 
00:38:26.071 [2024-12-10 12:41:32.643991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.644033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.071 qpair failed and we were unable to recover it. 00:38:26.071 [2024-12-10 12:41:32.644187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.071 [2024-12-10 12:41:32.644232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.644461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.644507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.644796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.644843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.645008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.645023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.645189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.645234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.645364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.645407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.645613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.645656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.645879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.645894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.646058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.646100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 
00:38:26.072 [2024-12-10 12:41:32.646272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.646330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.646601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.646643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.646801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.646844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.647069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.647111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.647283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.647299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.647445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.647460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.647611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.647626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.647696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.647711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.647811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.647825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 00:38:26.072 [2024-12-10 12:41:32.648034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.072 [2024-12-10 12:41:32.648078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.072 qpair failed and we were unable to recover it. 
00:38:26.072 [2024-12-10 12:41:32.648275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.072 [2024-12-10 12:41:32.648320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.072 qpair failed and we were unable to recover it.
00:38:26.072-00:38:26.078 [... the three-line error above repeats continuously (2024-12-10 12:41:32.648 through 12:41:32.694), mostly against tqpair=0x61500033fe80, with brief runs against tqpairs 0x61500032ff80, 0x615000350000, and 0x615000326480, all with addr=10.0.0.2, port=4420 ...]
00:38:26.078 [2024-12-10 12:41:32.695156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.695221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.695453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.695496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.695695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.695738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.696045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.696087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.696239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.696284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.696567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.696605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.696826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.696869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.697090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.697104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.697331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.697376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.697624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.697668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 
00:38:26.078 [2024-12-10 12:41:32.697825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.697869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.698044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.698087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.698360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.698404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.698569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.698613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.698855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.698898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.699135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.699150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.699237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.699252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.699501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.699515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.699672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.699686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.699819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.699834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 
00:38:26.078 [2024-12-10 12:41:32.699970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.699984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.700065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.700080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.700307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.700360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.700583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.700638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.700802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.700847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.701053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.701096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.701313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.701357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.701655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.701698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.701930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.701973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 00:38:26.078 [2024-12-10 12:41:32.702158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.078 [2024-12-10 12:41:32.702178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.078 qpair failed and we were unable to recover it. 
00:38:26.078 [2024-12-10 12:41:32.702355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.702398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.702606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.702649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.702877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.702919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.703046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.703089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.703361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.703406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.703674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.703714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.703869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.703913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.704123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.704179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.704398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.704413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.704511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.704526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 
00:38:26.079 [2024-12-10 12:41:32.704661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.704675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.704762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.704777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.704858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.704872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.705028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.705072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.705218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.705263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.705496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.705538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.705693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.705737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.705959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.706002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.706210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.706256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.706474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.706517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 
00:38:26.079 [2024-12-10 12:41:32.706729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.706774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.707062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.707105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.707250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.707295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.707507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.707549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.707783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.707826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.708083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.708098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.708299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.708314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.708413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.708428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.708582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.708597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.708685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.708700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 
00:38:26.079 [2024-12-10 12:41:32.708776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.708791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.708942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.708991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.709252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.709304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.709460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.709504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.709662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.709706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.709965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.710008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.710134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.710227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.710412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.079 [2024-12-10 12:41:32.710427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.079 qpair failed and we were unable to recover it. 00:38:26.079 [2024-12-10 12:41:32.710585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.710600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.710744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.710759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 
00:38:26.080 [2024-12-10 12:41:32.710982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.710997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.711226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.711241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.711335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.711350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.711485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.711501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.711680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.711723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.711888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.711931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.712151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.712197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.712288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.712302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.712547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.712561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.712709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.712723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 
00:38:26.080 [2024-12-10 12:41:32.712798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.712813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.712987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.713028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.713293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.713338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.713536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.713579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.713849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.713893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.714057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.714100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.714330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.714345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.714423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.714443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.714534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.714549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.714788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.714831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 
00:38:26.080 [2024-12-10 12:41:32.715036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.715079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.715235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.715288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.715433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.715448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.715550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.715565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.715774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.715789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.715884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.715899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.716135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.716186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.716456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.716500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.716706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.716750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.716891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.716933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 
00:38:26.080 [2024-12-10 12:41:32.717176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.717191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.717282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.717297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.080 [2024-12-10 12:41:32.717431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.080 [2024-12-10 12:41:32.717449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.080 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.717592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.717608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.717780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.717813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.717964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.718008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.718146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.718220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.718364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.718419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.718666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.718711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.718874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.718917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 
00:38:26.081 [2024-12-10 12:41:32.719046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.719089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.719292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.719338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.719545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.719589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.719784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.719826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.720067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.720110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.720389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.720405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.720559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.720574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.720792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.720807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.720920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.720963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.721179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.721194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 
00:38:26.081 [2024-12-10 12:41:32.721383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.721427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.721639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.721683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.721846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.721888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.722103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.722146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.722421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.722459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.722618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.722633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.722816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.722859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.722997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.723040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.723202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.723248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.723355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.723371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 
00:38:26.081 [2024-12-10 12:41:32.723499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.723513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.723602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.723616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.723785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.723826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.724091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.724136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.724294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.724338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.724575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.724590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.724755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.724798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.724981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.725004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.725093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.725108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.081 qpair failed and we were unable to recover it. 00:38:26.081 [2024-12-10 12:41:32.725244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.081 [2024-12-10 12:41:32.725260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 
00:38:26.082 [2024-12-10 12:41:32.725441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.725484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.725822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.725866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.726118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.726134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.726269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.726285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.726413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.726428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.726550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.726592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.726852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.726894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.727047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.727091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.727325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.727369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.727647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.727662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 
00:38:26.082 [2024-12-10 12:41:32.727843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.727859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.728043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.728099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.728319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.728334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.728491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.728534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.728816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.728859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.729153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.729208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.729480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.729523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.729815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.729858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.730074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.730117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.730384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.730399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 
00:38:26.082 [2024-12-10 12:41:32.730551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.730565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.730712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.730726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.730934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.730976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.731114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.731156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.731371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.731422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.731495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.731510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.731725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.731767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.731918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.731961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.732218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.732263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.732371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.732386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 
00:38:26.082 [2024-12-10 12:41:32.732467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.732481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.732656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.732671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.732908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.732951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.733156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.733209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.733408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.733422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.733497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.733512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.733739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.733754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.082 qpair failed and we were unable to recover it. 00:38:26.082 [2024-12-10 12:41:32.733900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.082 [2024-12-10 12:41:32.733915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.734074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.734117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.734268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.734312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 
00:38:26.083 [2024-12-10 12:41:32.734598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.734640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.734784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.734827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.735047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.735097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.735420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.735460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.735745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.735789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.736088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.736131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.736391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.736435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.736671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.736714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.736855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.736898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.737110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.737153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 
00:38:26.083 [2024-12-10 12:41:32.737307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.737359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.737558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.737601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.737748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.737792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.737999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.738042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.738326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.738342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.738409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.738424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.738552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.738567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.738759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.738774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.738928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.738971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.739201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.739245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 
00:38:26.083 [2024-12-10 12:41:32.739388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.739431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.739614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.739629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.739795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.739838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.740099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.740143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.740352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.740367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.740447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.740462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.740619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.740633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.740816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.740859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.741088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.741130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.083 [2024-12-10 12:41:32.741308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.741353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 
00:38:26.083 [2024-12-10 12:41:32.741551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.083 [2024-12-10 12:41:32.741594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.083 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.741741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.741784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.741930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.741973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.742119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.742162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.742391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.742406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.742481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.742500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.742703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.742717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.742783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.742797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.742880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.742895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.742969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.742983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 
00:38:26.084 [2024-12-10 12:41:32.743139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.743193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.743409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.743451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.743604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.743653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.743868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.743911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.744192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.744238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.744462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.744505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.744712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.744753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.745013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.745055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.745158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.745179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.745352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.745406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 
00:38:26.084 [2024-12-10 12:41:32.745614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.745658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.745939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.745982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.746255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.746300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.746513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.746557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.746698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.746741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.746979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.747034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.747178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.747209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.747319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.747335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.747556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.747599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.747862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.747904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 
00:38:26.084 [2024-12-10 12:41:32.748046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.748089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.748320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.748365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.748494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.748535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.748696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.748711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.748932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.748947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.749106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.749149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.749309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.749352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.749562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.084 [2024-12-10 12:41:32.749605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.084 qpair failed and we were unable to recover it. 00:38:26.084 [2024-12-10 12:41:32.749809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.749852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.750117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.750161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 
00:38:26.085 [2024-12-10 12:41:32.750442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.750456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.750600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.750615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.750703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.750717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.750949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.750964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.751172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.751187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.751348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.751364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.751500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.751527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.751683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.751726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.751926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.751967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.752114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.752129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 
00:38:26.085 [2024-12-10 12:41:32.752264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.752279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.752425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.752439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.752593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.752643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.752785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.752828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.753045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.753088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.753345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.753360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.753613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.753655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.753848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.753891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.754104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.754146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.754325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.754339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 
00:38:26.085 [2024-12-10 12:41:32.754493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.754507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.754673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.754716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.754923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.754966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.755094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.755137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.755375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.755390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.755554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.755597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.755817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.755861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.756125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.756179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.756401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.756444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.756707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.756762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 
00:38:26.085 [2024-12-10 12:41:32.756974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.757018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.757146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.757201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.757454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.757469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.757678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.757721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.085 [2024-12-10 12:41:32.757863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.085 [2024-12-10 12:41:32.757906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.085 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.758135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.758187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.758374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.758389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.758620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.758663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.758932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.758975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.759160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.759225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 
00:38:26.086 [2024-12-10 12:41:32.759425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.759473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.759702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.759749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.759915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.759932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.760004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.760019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.760175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.760191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.760308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.760351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.760553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.760595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.760883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.760926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.761190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.761235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 00:38:26.086 [2024-12-10 12:41:32.761528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.086 [2024-12-10 12:41:32.761570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.086 qpair failed and we were unable to recover it. 
00:38:26.086 [2024-12-10 12:41:32.761765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.086 [2024-12-10 12:41:32.761808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.086 qpair failed and we were unable to recover it.
00:38:26.086 ... 00:38:26.091 [2024-12-10 12:41:32.761765 through 12:41:32.814449] (the three-line sequence above repeats continuously for this interval: every connect() attempt fails with errno = 111 against addr=10.0.0.2, port=4420, reported across tqpair handles 0x61500033fe80, 0x615000326480, 0x61500032ff80, and 0x615000350000, and each qpair fails without recovery.)
00:38:26.091 [2024-12-10 12:41:32.814552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.091 [2024-12-10 12:41:32.814574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.091 qpair failed and we were unable to recover it. 00:38:26.091 [2024-12-10 12:41:32.814744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.814767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.814938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.814960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.815179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.815202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.815442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.815465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.815666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.815689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.815847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.815869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.816025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.816070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.816226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.816271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.816469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.816512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 
00:38:26.092 [2024-12-10 12:41:32.816728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.816751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.816989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.817011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.817105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.817127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.817373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.817397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.817506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.817529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.817633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.817663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.817830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.817875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.818191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.818237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.818433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.818456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.818622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.818673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 
00:38:26.092 [2024-12-10 12:41:32.818893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.818937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.819219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.819264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.819415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.819458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.819752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.819796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.819936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.819980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.820136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.820190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.820407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.820451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.820593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.820635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.820911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.820934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.821180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.821231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 
00:38:26.092 [2024-12-10 12:41:32.821350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.821367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.821505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.821543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.821768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.821812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.821969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.822013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.822214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.092 [2024-12-10 12:41:32.822260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.092 qpair failed and we were unable to recover it. 00:38:26.092 [2024-12-10 12:41:32.822471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.822514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.822804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.822848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.823061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.823115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.823401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.823454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.823608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.823623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 
00:38:26.093 [2024-12-10 12:41:32.823698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.823736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.823987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.824031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.824302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.824347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.824571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.824614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.824820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.824863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.825074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.825117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.825333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.825378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.825564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.825580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.825756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.825798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.826086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.826129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 
00:38:26.093 [2024-12-10 12:41:32.826290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.826335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.826550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.826593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.826797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.826840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.827127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.827183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.827350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.827365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.827588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.827603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.827687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.827702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.827857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.827871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.828103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.828146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.828361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.828411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 
00:38:26.093 [2024-12-10 12:41:32.828658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.828701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.828906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.828950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.829232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.829278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.829442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.829462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.829634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.829677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.829892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.829935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.830195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.830239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.830502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.830545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.830746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.830760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.830915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.830958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 
00:38:26.093 [2024-12-10 12:41:32.831245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.831290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.831434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.831476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.831615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.831630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.093 [2024-12-10 12:41:32.831728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.093 [2024-12-10 12:41:32.831743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.093 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.831835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.831849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.832057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.832099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.832372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.832416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.832619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.832634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.832794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.832838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.833073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.833116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 
00:38:26.094 [2024-12-10 12:41:32.833276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.833331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.833427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.833442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.833588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.833603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.833838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.833853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.834010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.834025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.834224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.834240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.834380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.834394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.834603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.834618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.834709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.834724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.834815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.834830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 
00:38:26.094 [2024-12-10 12:41:32.834993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.835007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.835148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.835204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.835474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.835518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.835675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.835718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.835941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.835983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.836198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.836241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.836385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.836399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.836532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.836546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.836614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.836629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.836717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.836766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 
00:38:26.094 [2024-12-10 12:41:32.837029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.837072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.837279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.837325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.837520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.837562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.837703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.837746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.837898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.837941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.838263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.838279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.838365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.838380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.838606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.838621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.838802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.838846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.838973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.839017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 
00:38:26.094 [2024-12-10 12:41:32.839210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.839264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.839499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.094 [2024-12-10 12:41:32.839514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.094 qpair failed and we were unable to recover it. 00:38:26.094 [2024-12-10 12:41:32.839704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.839748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.839955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.840000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.840275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.840320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.840617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.840632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.840724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.840739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.840823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.840837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.841072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.841115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.841337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.841382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 
00:38:26.095 [2024-12-10 12:41:32.841579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.841622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.841882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.841925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.842134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.842205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.842440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.842484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.842685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.842699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.842831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.842846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.843023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.843041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.843202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.843260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.843408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.843452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.843594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.843637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 
00:38:26.095 [2024-12-10 12:41:32.843808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.843823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.844055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.844098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.844397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.844442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.844652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.844667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.844842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.844885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.845088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.845131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.845359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.845404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.845622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.845664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.845928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.845970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.846199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.846251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 
00:38:26.095 [2024-12-10 12:41:32.846464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.846507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.846819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.846862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.847029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.847071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.847304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.847349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.847505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.847548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.847720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.847734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.847882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.847896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.848070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.848114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.848343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.848385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 00:38:26.095 [2024-12-10 12:41:32.848649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.095 [2024-12-10 12:41:32.848692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.095 qpair failed and we were unable to recover it. 
00:38:26.095 [2024-12-10 12:41:32.848895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.095 [2024-12-10 12:41:32.848936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.849087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.849130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.849411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.849455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.849661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.849704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.849914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.849928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.850007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.850022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.850154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.850175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.850335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.850377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.850537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.850580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.850812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.850854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.851017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.851060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.851261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.851307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.851544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.851587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.851727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.851770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.851974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.852016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.852298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.852350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.852502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.852546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.852743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.852787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.853044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.853059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.853147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.853162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.853384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.853400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.853561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.853575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.853727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.853742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.853844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.853859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.854014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.854028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.854118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.854161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.854391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.854433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.854692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.854734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.854881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.854924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.855150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.855213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.855350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.855392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.855597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.855640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.855789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.855804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.855883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.855898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.096 qpair failed and we were unable to recover it.
00:38:26.096 [2024-12-10 12:41:32.856055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.096 [2024-12-10 12:41:32.856098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.856330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.856374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.856571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.856614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.856790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.856805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.856902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.856917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.857136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.857150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.857385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.857402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.857501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.857521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.857603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.857618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.857849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.857864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.858068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.858083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.858182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.858197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.858338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.858353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.858442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.858456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.858535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.858549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.858635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.858649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.858786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.858828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.859039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.859080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.859340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.859384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.859555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.859570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.859718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.859761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.860026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.860070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.860234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.860279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.860420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.860462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.860613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.860656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.860764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.860778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.860856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.860870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.861912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.861927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.862080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.862098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.862186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.862201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.862289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.862304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.862483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.097 [2024-12-10 12:41:32.862527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.097 qpair failed and we were unable to recover it.
00:38:26.097 [2024-12-10 12:41:32.862811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.862851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.863072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.863115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.863286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.863330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.863657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.863700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.863839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.863854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.864000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.864015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.864159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.864184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.864367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.864409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.864668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.864710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.865007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.865051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.865219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.865262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.098 [2024-12-10 12:41:32.865496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.098 [2024-12-10 12:41:32.865539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.098 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.865797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.865813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.865960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.865975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.866134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.866150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.866292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.866309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.866449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.866464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.866559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.866574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.866650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.866665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.866763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.866778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.866931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.866946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.867014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.867028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.867095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.867110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.867249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.867297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.867588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.867635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.867829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.867855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.867945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.867962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.868896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.868916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.869065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.869082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.869292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.869308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.869394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.869409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.869489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-10 12:41:32.869504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
00:38:26.382 [2024-12-10 12:41:32.869642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.869657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.869722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.869737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.869819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.869834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.870067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.870082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.870223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.870239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.870355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.870370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.870441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.870455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.870646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.870661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.870793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.870809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.870960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.870975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.871050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.871065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.871163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.871235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.871391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.871434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.871703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.871748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.871917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.871932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.872083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.872099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.872230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.872246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.872394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.872437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.872717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.872760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.872902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.872944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.873143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.873195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.873480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.873523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.873719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.873762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.874023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.874082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.874325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.874375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.874669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.874714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.874892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.874915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.875111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.875134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.875314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.875338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.875558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.875575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.875725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.875740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.875939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.875954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.876024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.876039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.876178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.876194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.876345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.876360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.876574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.876617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.876758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.876809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.877015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.877057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.383 qpair failed and we were unable to recover it.
00:38:26.383 [2024-12-10 12:41:32.877352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.383 [2024-12-10 12:41:32.877400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.877650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.877672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.877906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.877929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.878015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.878032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.878248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.878264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.878409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.878435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.878585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.878600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.878759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.878774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.878877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.878918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.879152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.879211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.879365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.879409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.879686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.879709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.879877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.879902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.880161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.880221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.880383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.880427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.880633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.880677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.880973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.881018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.881215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.881262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.881483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.384 [2024-12-10 12:41:32.881526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.384 qpair failed and we were unable to recover it.
00:38:26.384 [2024-12-10 12:41:32.881746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.881790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.882085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.882129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.882287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.882332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.882554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.882607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.882721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.882744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.882841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.882864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.883053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.883101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.883378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.883427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.883532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.883549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.883742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.883785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 
00:38:26.384 [2024-12-10 12:41:32.884078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.884120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.884395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.884441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.884709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.884752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.884941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.884984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.885125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.885179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.885471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.885515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.885663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.885705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.885900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.885943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.886150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.886214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 00:38:26.384 [2024-12-10 12:41:32.886419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.384 [2024-12-10 12:41:32.886470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.384 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-10 12:41:32.886767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.886810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.886962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.887005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.887298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.887344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.887558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.887601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.887758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.887801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.887968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.887983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.888135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.888185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.888421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.888465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.888773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.888818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.888950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.889005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-10 12:41:32.889293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.889337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.889611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.889626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.889766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.889781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.889945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.889960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.890182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.890227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.890454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.890497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.890775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.890817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.891029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.891072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.891357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.891401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.891630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.891644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-10 12:41:32.891727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.891742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.892000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.892043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.892253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.892298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.892491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.892535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.892685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.892728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.892952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.892995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.893298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.893388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.893749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.893838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.894085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.894136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.894446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.894492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-10 12:41:32.894784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.894830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.895118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.895162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.895354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.895400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.895685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.895708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.895824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.895846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.896015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-10 12:41:32.896064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-10 12:41:32.896252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.896309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.896593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.896641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.896838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.896882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.897102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.897155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-10 12:41:32.897387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.897433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.897702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.897745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.898009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.898053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.898189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.898235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.898507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.898550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.898757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.898801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.899031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.899053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.899163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.899192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.899279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.899301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.899461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.899504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-10 12:41:32.899694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.899737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.899957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.899999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.900197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.900241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.900459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.900502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.900642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.900684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.900927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.900972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.901200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.901245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.901513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.901557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.901795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.901817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.901991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.902014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-10 12:41:32.902138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.902194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.902442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.902484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.902667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.902712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.902878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.902893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.903078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.903121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.903431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.903475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.903791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.903845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.904047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.904089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.904319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.904364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-10 12:41:32.904594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-10 12:41:32.904616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 
00:38:26.387 [2024-12-10 12:41:32.904733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.904777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.904932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.904973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.905260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.905305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.905440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.905483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.905771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.905815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.906014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.906036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.906148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.906200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.906359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.906409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.906691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.906735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.907003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.907029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 
00:38:26.387 [2024-12-10 12:41:32.907123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.907145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.907420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.907466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.907701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.907745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.908008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.908051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.908292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.908337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.908576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.908621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.908825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.908867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.909089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.909111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.909291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.909314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.909435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.909477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 
00:38:26.387 [2024-12-10 12:41:32.909629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.909685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.909880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.909923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.910115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.910137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.910292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.910310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.910399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.910425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.910574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.910589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.910752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.910794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.911063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.911106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.911271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.911315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.911513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.911556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 
00:38:26.387 [2024-12-10 12:41:32.911822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.911875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.912038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.912053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.912142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.912157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.912321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.912336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.912483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.912526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.912732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.912775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.913013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.913101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.913358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.913407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.387 [2024-12-10 12:41:32.913611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.387 [2024-12-10 12:41:32.913635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.387 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.913874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.913921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 
00:38:26.388 [2024-12-10 12:41:32.914078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.914121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.914346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.914391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.914645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.914690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.914976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.915019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.915157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.915211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.915432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.915476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.915651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.915666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.915912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.915955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.916204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.916251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.916484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.916549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 
00:38:26.388 [2024-12-10 12:41:32.916748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.916772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.916895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.916940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.917149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.917208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.917497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.917542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.917752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.917796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.917934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.917978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.918194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.918239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.918453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.918498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.918641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.918684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.918903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.918926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 
00:38:26.388 [2024-12-10 12:41:32.919163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.919183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.919347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.919362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.919516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.919531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.919690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.919705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.919783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.919798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.919947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.919983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.920201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.920245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.920459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.920501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.920760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.920775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.920881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.920895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 
00:38:26.388 [2024-12-10 12:41:32.921055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.921098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.921305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.921349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.921479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.921520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.921771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.921796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.922023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.922038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.388 [2024-12-10 12:41:32.922181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.388 [2024-12-10 12:41:32.922196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.388 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.922336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.922385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.922703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.922750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.922951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.922970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.923190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.923235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 
00:38:26.389 [2024-12-10 12:41:32.923436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.923477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.923764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.923806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.924117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.924158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.924419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.924462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.924715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.924729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.924968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.924983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.925148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.925162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.925297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.925312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.925456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.925497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.925691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.925732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 
00:38:26.389 [2024-12-10 12:41:32.925966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.926014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.926230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.926275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.926424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.926467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.926671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.926721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.926936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.926959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.927209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.927255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.927479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.927521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.927736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.927780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.927965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.927988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.928190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.928235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 
00:38:26.389 [2024-12-10 12:41:32.928445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.928489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.928794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.928838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.929048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.929092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.929387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.929432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.929591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.929634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.929900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.929944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.930148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.930177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.930354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.930396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.930602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.930646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.930801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.930844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 
00:38:26.389 [2024-12-10 12:41:32.930954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.930977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.931139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.931156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.931303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.931318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.931459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.931475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.389 qpair failed and we were unable to recover it. 00:38:26.389 [2024-12-10 12:41:32.931648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.389 [2024-12-10 12:41:32.931690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.931835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.931878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.932040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.932091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.932339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.932385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.932650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.932709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.932998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.933043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 
00:38:26.390 [2024-12-10 12:41:32.933258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.933302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.933460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.933503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.933790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.933833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.934057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.934100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.934248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.934293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.934531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.934575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.934704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.934748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.934895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.934917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.935093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.935136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.935411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.935456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 
00:38:26.390 [2024-12-10 12:41:32.935770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.935813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.936013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.936057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.936202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.936247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.936407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.936450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.936712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.936755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.937027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.937071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.937218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.937263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.937550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.937593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.937805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.937849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.938092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.938136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 
00:38:26.390 [2024-12-10 12:41:32.938357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.938401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.938616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.938660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.938945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.938967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.939202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.939226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.939444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.939462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.939608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.939623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.939707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.939735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.939943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.939985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.940216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.940261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.940553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.940596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 
00:38:26.390 [2024-12-10 12:41:32.940809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.940863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.941089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.941104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.390 [2024-12-10 12:41:32.941304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.390 [2024-12-10 12:41:32.941320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.390 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.941495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.941510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.941667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.941710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.942002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.942045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.942247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.942299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.942533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.942576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.942777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.942791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.942950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.942965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 
00:38:26.391 [2024-12-10 12:41:32.943094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.943109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.943259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.943274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.943481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.943513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.943735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.943778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.944087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.944131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.944306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.944353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.944549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.944592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.944742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.944765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.944937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.944982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.945200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.945243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 
00:38:26.391 [2024-12-10 12:41:32.945501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.945546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.945684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.945701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.945769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.945783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.945952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.945996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.946209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.946253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.946465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.946508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.946734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.946748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.946928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.946969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.947109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.947152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.947300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.947349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 
00:38:26.391 [2024-12-10 12:41:32.947566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.947609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.947899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.947941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.948040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.948055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.948212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.948274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.948480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.948522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.948718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.948775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.948999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.949014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.391 qpair failed and we were unable to recover it. 00:38:26.391 [2024-12-10 12:41:32.949144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.391 [2024-12-10 12:41:32.949158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.949304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.949353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.949561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.949604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 
00:38:26.392 [2024-12-10 12:41:32.949898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.949940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.950226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.950270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.950414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.950457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.950598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.950613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.950779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.950821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.951048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.951093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.951319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.951381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.951534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.951578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.951777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.951818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.952115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.952130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 
00:38:26.392 [2024-12-10 12:41:32.952214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.952230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.952466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.952510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.952658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.952697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.952860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.952874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.952989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.953032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.953229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.953271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.953429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.953472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.953667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.953682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.953866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.953909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.954127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.954179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 
00:38:26.392 [2024-12-10 12:41:32.954385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.954427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.954630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.954672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.954874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.954915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.955047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.955062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.955200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.955241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.955440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.955484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.955685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.955727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.955925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.955939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.956076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.956090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.956232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.956279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 
00:38:26.392 [2024-12-10 12:41:32.956538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.956580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.956827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.956870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.957067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.957109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.957322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.957365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.957563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.957608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.392 [2024-12-10 12:41:32.957867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.392 [2024-12-10 12:41:32.957907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.392 qpair failed and we were unable to recover it. 00:38:26.393 [2024-12-10 12:41:32.958049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.393 [2024-12-10 12:41:32.958063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.393 qpair failed and we were unable to recover it. 00:38:26.393 [2024-12-10 12:41:32.958210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.393 [2024-12-10 12:41:32.958243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.393 qpair failed and we were unable to recover it. 00:38:26.393 [2024-12-10 12:41:32.958453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.393 [2024-12-10 12:41:32.958468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.393 qpair failed and we were unable to recover it. 00:38:26.393 [2024-12-10 12:41:32.958674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.393 [2024-12-10 12:41:32.958690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.393 qpair failed and we were unable to recover it. 
00:38:26.393 [2024-12-10 12:41:32.958914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.393 [2024-12-10 12:41:32.958928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.393 qpair failed and we were unable to recover it.
00:38:26.393 [... the same three-line connect()/qpair-failure record repeats ~109 more times for tqpair=0x61500033fe80, timestamps 2024-12-10 12:41:32.959126 through 12:41:32.983047 ...]
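For reference, errno = 111 on Linux is ECONNREFUSED: the TCP connection attempt is rejected because nothing is listening on 10.0.0.2:4420, consistent with the nvmf target application having been stopped by the disconnect test (the shell's "Killed" message surfaces further down in this log). A minimal standalone sketch, illustrative only and not SPDK code (the address and port mirror the log but are placeholders), that reproduces the same connect() failure against a port with no listener:

/* econnrefused_demo.c - illustrative sketch, not SPDK code.
 * Shows that connect() to a TCP port with no listener fails with
 * errno = 111 (ECONNREFUSED) on Linux, matching the records above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* placeholder target address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}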
00:38:26.396 [... three more failure records for tqpair=0x61500033fe80, timestamps 12:41:32.983228 through 12:41:32.983790 ...]
00:38:26.396 [2024-12-10 12:41:32.984081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.396 [2024-12-10 12:41:32.984137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.396 qpair failed and we were unable to recover it.
00:38:26.396 [... the same record repeats ~26 more times for tqpair=0x615000350000, timestamps 12:41:32.984339 through 12:41:32.990735 ...]
00:38:26.397 [... ~30 further identical failure records, timestamps 12:41:32.990914 through 12:41:32.998473, alternating across tqpair=0x61500033fe80, 0x61500032ff80, 0x615000326480 and 0x615000350000 ...]
00:38:26.397 [... ~8 more failure records for tqpair=0x615000350000 and tqpair=0x61500032ff80, timestamps 12:41:32.998693 through 12:41:33.000504 ...]
00:38:26.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3902858 Killed "${NVMF_APP[@]}" "$@"
00:38:26.397 [2024-12-10 12:41:33.000727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.397 [2024-12-10 12:41:33.000774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.397 qpair failed and we were unable to recover it.
00:38:26.397 [... ~3 more failure records (tqpair=0x61500032ff80, 0x61500033fe80, 0x615000350000), timestamps 12:41:33.000993 through 12:41:33.001268 ...]
00:38:26.397 12:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:26.398 [... ~2 more failure records for tqpair=0x615000350000, timestamps 12:41:33.001518 through 12:41:33.001724 ...]
00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:26.398 [... ~3 more failure records for tqpair=0x615000350000 and tqpair=0x61500032ff80, timestamps 12:41:33.001903 through 12:41:33.002395 ...]
00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:26.398 [... ~9 more failure records for tqpair=0x61500033fe80 and tqpair=0x615000350000, timestamps 12:41:33.002542 through 12:41:33.004880 ...]
00:38:26.398 [2024-12-10 12:41:33.005157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.005187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.005395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.005418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.005579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.005596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.005694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.005710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.005798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.005815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.006038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.006134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.006246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.006402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.006566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 
00:38:26.398 [2024-12-10 12:41:33.006729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.006841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.006958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.006974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.007062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.007161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.007301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.007379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.007483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.007569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 [2024-12-10 12:41:33.007666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 
00:38:26.398 [2024-12-10 12:41:33.007753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.398 [2024-12-10 12:41:33.007767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.398 qpair failed and we were unable to recover it. 00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3903565 00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3903565 00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3903565 ']' 00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.398 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.399 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:26.399 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.399 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.399 [2024-12-10 12:41:33.010587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.010632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.010963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.011050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.011358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.011405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.011630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.011674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 
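Here the harness has relaunched the target: nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 is started inside the cvl_0_0_ns_spdk network namespace, its pid is captured as nvmfpid=3903565, and waitforlisten blocks until the new process answers on the RPC UNIX socket /var/tmp/spdk.sock, while the qpair-failure records keep streaming from the still-running initiator. A simplified sketch of a waitforlisten-style poll (an illustration only, not the actual common.sh implementation; pid and socket path taken from the log above):

  # poll until the pid is alive and the RPC socket exists, or give up
  pid=3903565; sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo 'target exited before listening' >&2; break; }
    [ -S "$sock" ] && break
    sleep 0.1
  done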
00:38:26.399 [2024-12-10 12:41:33.011812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.011855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.012093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.012137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.012366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.012410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.012643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.012684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.012899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.012942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.013109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.013156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.013374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.013416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.013624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.013669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.013859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.013874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.014036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.014078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 
00:38:26.399 [2024-12-10 12:41:33.014239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.014294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.014607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.014654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.014878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.014932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.015098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.015152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.015337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.015360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.015515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.015558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.015804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.015847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.015999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.016041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.016293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.016317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.016551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.016567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 
00:38:26.399 [2024-12-10 12:41:33.016716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.016730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.016889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.016931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.017198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.017241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.017452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.017494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.017635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.017677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.017827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.017869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.018058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.018071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.018296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.018309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.018392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.018405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.018634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.018678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 
00:38:26.399 [2024-12-10 12:41:33.018818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.018861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.019019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.019074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.019391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.019420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.399 [2024-12-10 12:41:33.019526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.399 [2024-12-10 12:41:33.019549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.399 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.019722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.019768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.019987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.020031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.020241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.020285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.020478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.020518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.020726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.020768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.021007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.021049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 
00:38:26.400 [2024-12-10 12:41:33.021258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.021272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.021358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.021372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.021507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.021521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.021610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.021624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.021784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.021826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.021974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.022018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.022158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.022213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.022359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.022404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.022635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.022684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.022905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.022950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 
00:38:26.400 [2024-12-10 12:41:33.023151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.023179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.023265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.023313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.023474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.023515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.023705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.023745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.023888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.023910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.024012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.024033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.024109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.024124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.024282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.024296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.024453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.024477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.024652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.024695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 
00:38:26.400 [2024-12-10 12:41:33.024966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.025010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.025265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.025288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.025447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.025470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.025644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.025666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.025760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.025776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.025946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.025988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.400 [2024-12-10 12:41:33.026131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.400 [2024-12-10 12:41:33.026183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.400 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.026383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.026426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.026623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.026664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.026839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.026883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 
00:38:26.401 [2024-12-10 12:41:33.027031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.027079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.027245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.027262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.027442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.027456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.027589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.027603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.027672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.027686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.027889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.027904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.028048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.028062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.028164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.028219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.028449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.028491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.028639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.028680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 
00:38:26.401 [2024-12-10 12:41:33.028807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.028849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.029003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.029044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.029237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.029281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.029550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.029592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.029736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.029777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.029938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.029981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.030233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.030248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.030390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.030403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.030582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.030596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.030751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.030793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 
00:38:26.401 [2024-12-10 12:41:33.031030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.031073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.031287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.031312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.031506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.031550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.031708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.031752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.031914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.031957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.032092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.032113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.032326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.032370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.032589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.032634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.032792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.032841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.033090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.033111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 
00:38:26.401 [2024-12-10 12:41:33.033290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.033313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.033415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.033462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.033659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.033701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.033985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.034020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.034175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.034189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.401 qpair failed and we were unable to recover it. 00:38:26.401 [2024-12-10 12:41:33.034425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.401 [2024-12-10 12:41:33.034438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.402 qpair failed and we were unable to recover it. 00:38:26.402 [2024-12-10 12:41:33.034577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.402 [2024-12-10 12:41:33.034618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.402 qpair failed and we were unable to recover it. 00:38:26.402 [2024-12-10 12:41:33.034811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.402 [2024-12-10 12:41:33.034852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.402 qpair failed and we were unable to recover it. 00:38:26.402 [2024-12-10 12:41:33.035049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.402 [2024-12-10 12:41:33.035099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.402 qpair failed and we were unable to recover it. 00:38:26.402 [2024-12-10 12:41:33.035237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.402 [2024-12-10 12:41:33.035257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.402 qpair failed and we were unable to recover it. 
00:38:26.402 [2024-12-10 12:41:33.035425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.402 [2024-12-10 12:41:33.035465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.402 qpair failed and we were unable to recover it.
[... the same three-line failure triplet — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 12:41:33.035425 through 12:41:33.069308, almost always against tqpair=0x61500033fe80, with occasional attempts against tqpair=0x61500032ff80, 0x615000350000, and 0x615000326480, all to addr=10.0.0.2, port=4420 ...]
00:38:26.407 [2024-12-10 12:41:33.069417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.069429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.069515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.069528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.069622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.069635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.069721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.069733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.069801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.069813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.069892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.069905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.070040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.070191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.070298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.070401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 
00:38:26.407 [2024-12-10 12:41:33.070568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.070677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.070857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.407 [2024-12-10 12:41:33.070955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.407 [2024-12-10 12:41:33.070968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.407 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.071104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.071117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.071201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.071214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.071290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.071303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.071540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.071553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.071756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.071772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.071931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.071944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 
00:38:26.408 [2024-12-10 12:41:33.072081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.072094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.072162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.072181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.072317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.072331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.072416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.072428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.072568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.072581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.072646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.072659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.072889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.072902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.073048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.073159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.073265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 
00:38:26.408 [2024-12-10 12:41:33.073364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.073457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.073668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.073793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.073944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.073957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.074164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.074264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.074347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.074505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.074600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 
00:38:26.408 [2024-12-10 12:41:33.074681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.074851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.074932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.074945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.075081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.075093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.075181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.075195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.075338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.075351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.075423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.075436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.075576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.075589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.075668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.075681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.075827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.075840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 
00:38:26.408 [2024-12-10 12:41:33.076000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.076019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.076111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.076124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.408 [2024-12-10 12:41:33.076272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.408 [2024-12-10 12:41:33.076286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.408 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.076420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.076434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.076573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.076586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.076665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.076677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.076749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.076762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.076906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.076919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.077121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.077137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.077225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.077239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 
00:38:26.409 [2024-12-10 12:41:33.077299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.077319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.077395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.077407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.077548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.077562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.077766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.077779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.077936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.077949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.078032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.078196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.078299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.078451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.078596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 
00:38:26.409 [2024-12-10 12:41:33.078673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.078775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.078855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.078868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.079022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.079034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.079105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.079118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.079268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.079282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.079519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.079533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.079609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.079622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.079705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.079718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.079921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.079934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 
00:38:26.409 [2024-12-10 12:41:33.080103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.080189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.080365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.080475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.080568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.080681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.080755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.409 qpair failed and we were unable to recover it. 00:38:26.409 [2024-12-10 12:41:33.080858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.409 [2024-12-10 12:41:33.080871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 
00:38:26.410 [2024-12-10 12:41:33.081203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.081829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.081842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 
00:38:26.410 [2024-12-10 12:41:33.082392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.082914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.082927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.083070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.083163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.083356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.083449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 
00:38:26.410 [2024-12-10 12:41:33.083546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.083695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.083846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.083933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.083947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.084015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.084186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.084353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.084441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.084537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.084688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 
00:38:26.410 [2024-12-10 12:41:33.084840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.084918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.084931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.085071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.085084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.085172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.085186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.085395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.085414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.085497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.085511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.410 [2024-12-10 12:41:33.085602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.410 [2024-12-10 12:41:33.085615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.410 qpair failed and we were unable to recover it. 00:38:26.411 [2024-12-10 12:41:33.085825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.411 [2024-12-10 12:41:33.085838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.411 qpair failed and we were unable to recover it. 00:38:26.411 [2024-12-10 12:41:33.085923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.411 [2024-12-10 12:41:33.085936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.411 qpair failed and we were unable to recover it. 00:38:26.411 [2024-12-10 12:41:33.086014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.411 [2024-12-10 12:41:33.086027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.411 qpair failed and we were unable to recover it. 
00:38:26.411 [2024-12-10 12:41:33.086162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.411 [2024-12-10 12:41:33.086179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.411 qpair failed and we were unable to recover it.
00:38:26.411 [... the same connect()/qpair failure triplet repeats for tqpair=0x61500033fe80 (10.0.0.2:4420) with only the timestamp advancing, 12:41:33.086 through 12:41:33.092 ...]
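For reference: errno 111 is ECONNREFUSED, which connect() returns when no listener is accepting on the target address, here 10.0.0.2:4420, the NVMe-oF TCP default port. The sketch below (illustrative only, not part of the test; it uses 127.0.0.1 and an unused port as stand-ins) reproduces the same failure mode that posix_sock_create is reporting:

    /* Minimal sketch: a TCP connect() to a port with no listener
     * fails with errno 111 (ECONNREFUSED), the error seen in the
     * posix_sock_create entries above. Address and port here are
     * hypothetical stand-ins for the log's 10.0.0.2:4420 target. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* plain TCP socket */
        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                   /* NVMe-oF TCP default port */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            /* With nothing listening, this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

The initiator keeps retrying the queue pair after each refusal, which is consistent with the repeating triplet above.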
00:38:26.412 [... the failure triplet continues through 12:41:33.093, then an SPDK nvmf process begins initialization ...]
00:38:26.412 [2024-12-10 12:41:33.093284] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
00:38:26.412 [2024-12-10 12:41:33.093381] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:26.412 [... connect() failed, errno = 111 / qpair failed entries continue, interleaved with the startup, through 12:41:33.094 ...]
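For reference, in the EAL parameters above: -c 0xF0 is a hexadecimal core mask (bit N selects lcore N, so 0xF0 picks cores 4-7), --file-prefix=spdk0 keeps this process's hugepage/shared-memory files separate from other DPDK instances on the host, and --proc-type=auto lets EAL decide whether it runs as a primary or secondary process. A minimal sketch (illustrative, not part of the test) decoding the core mask the same way:

    /* Decode a DPDK-style hex core mask: bit N set means lcore N is used.
     * 0xF0 = 0b11110000, i.e. cores 4-7, matching -c 0xF0 above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long mask = 0xF0;   /* value taken from the log's -c flag */
        printf("cores selected by -c 0x%llX:", mask);
        for (int core = 0; core < 64; core++)
            if (mask & (1ULL << core))
                printf(" %d", core);
        printf("\n");   /* prints: cores selected by -c 0xF0: 4 5 6 7 */
        return 0;
    }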
00:38:26.413 [... the identical connect() failed, errno = 111 / qpair failed and we were unable to recover it. entries repeat for tqpair=0x61500033fe80 (10.0.0.2:4420), 12:41:33.094 through 12:41:33.113 ...]
00:38:26.416 [2024-12-10 12:41:33.113825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.113838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.113908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.113922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.113994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.114087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.114193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.114279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.114441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.114586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.114735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.114826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 
00:38:26.416 [2024-12-10 12:41:33.114981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.114994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.416 qpair failed and we were unable to recover it. 00:38:26.416 [2024-12-10 12:41:33.115134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.416 [2024-12-10 12:41:33.115147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.115285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.115298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.115439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.115453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.115587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.115601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.115682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.115695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.115841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.115856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.115939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.115952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.116136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.116149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.116303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.116316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 
00:38:26.417 [2024-12-10 12:41:33.116397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.116410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.116504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.116517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.116654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.116667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.116812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.116825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.116916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.116930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.117020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.117180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.117337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.117512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.117670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 
00:38:26.417 [2024-12-10 12:41:33.117770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.117870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.117969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.117983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.118117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.118268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.118362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.118457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.118565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.118721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.118812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 
00:38:26.417 [2024-12-10 12:41:33.118973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.118987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.119099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.119114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.119245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.119265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.119405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.119420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.119517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.119531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.119615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.119629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.119772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.119786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.119924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.119937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.120182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.120197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.417 [2024-12-10 12:41:33.120347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.120361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 
00:38:26.417 [2024-12-10 12:41:33.120563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.417 [2024-12-10 12:41:33.120576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.417 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.120752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.120765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.120850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.120863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.120952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.120965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 
00:38:26.418 [2024-12-10 12:41:33.121772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.121964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.121977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.122110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.122124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.122206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.122220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.122435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.122448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.122523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.122536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.122614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.122627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.122819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.122832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.122897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.122910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 
00:38:26.418 [2024-12-10 12:41:33.122996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.123192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.123356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.123468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.123549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.123635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.123792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.123955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.123969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.124058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.124145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 
00:38:26.418 [2024-12-10 12:41:33.124302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.124391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.124474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.124623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.124783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.124875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.418 [2024-12-10 12:41:33.124889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.418 qpair failed and we were unable to recover it. 00:38:26.418 [2024-12-10 12:41:33.125029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.125043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.125187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.125201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.125347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.125362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.125495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.125508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 
00:38:26.419 [2024-12-10 12:41:33.125645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.125658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.125812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.125826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.125924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.125939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.126020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.126034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.126128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.126142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.126314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.126329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.126470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.126485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.126626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.126640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.126711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.126725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.126860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.126874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 
00:38:26.419 [2024-12-10 12:41:33.127026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.127041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.127188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.127202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.127362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.127376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.127543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.127558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.127646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.127660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.127801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.127815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.127974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.127993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.128205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.128220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.128429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.128443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.128598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.128612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 
00:38:26.419 [2024-12-10 12:41:33.128762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.128776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.128848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.128862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.128959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.128974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.129080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.129093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.129208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.129240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.129417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.129455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.129629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.129662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.129824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.129841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.129912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.129926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.130010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.130030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 
00:38:26.419 [2024-12-10 12:41:33.130105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.130119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.130280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.130294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.130444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.130458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.130539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.130553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.419 [2024-12-10 12:41:33.130754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.419 [2024-12-10 12:41:33.130768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.419 qpair failed and we were unable to recover it. 00:38:26.420 [2024-12-10 12:41:33.130983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.420 [2024-12-10 12:41:33.130998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.420 qpair failed and we were unable to recover it. 00:38:26.420 [2024-12-10 12:41:33.131069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.420 [2024-12-10 12:41:33.131082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.420 qpair failed and we were unable to recover it. 00:38:26.420 [2024-12-10 12:41:33.131246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.420 [2024-12-10 12:41:33.131261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.420 qpair failed and we were unable to recover it. 00:38:26.420 [2024-12-10 12:41:33.131469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.420 [2024-12-10 12:41:33.131484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.420 qpair failed and we were unable to recover it. 00:38:26.420 [2024-12-10 12:41:33.131633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.420 [2024-12-10 12:41:33.131648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.420 qpair failed and we were unable to recover it. 
00:38:26.420 [2024-12-10 12:41:33.131796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.420 [2024-12-10 12:41:33.131810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.420 qpair failed and we were unable to recover it.
00:38:26.420 [log condensed: the three-line failure above repeats back-to-back from 12:41:33.131796 through 12:41:33.160398 with only the timestamp changing. Every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered. Most retries target tqpair=0x61500033fe80; a handful land on tqpair=0x615000326480, 0x61500032ff80, and 0x615000350000, all with the same result.]
00:38:26.426 [2024-12-10 12:41:33.160533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.160548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.160639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.160653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.160810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.160825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.160911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.160926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.161078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.161093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.161174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.161189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.161381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.161406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.161505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.161526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.161672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.161694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.161775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.161796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 
00:38:26.426 [2024-12-10 12:41:33.161936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.161960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.162150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.162187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.162341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.162359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.162447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.162463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.162648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.162663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.162836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.162853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.163045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.163060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.163146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.163162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.163325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.163341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 00:38:26.426 [2024-12-10 12:41:33.163494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.426 [2024-12-10 12:41:33.163516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.426 qpair failed and we were unable to recover it. 
00:38:26.432 [2024-12-10 12:41:33.181824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.432 [2024-12-10 12:41:33.181838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.432 qpair failed and we were unable to recover it. 00:38:26.432 [2024-12-10 12:41:33.181925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.432 [2024-12-10 12:41:33.181940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.432 qpair failed and we were unable to recover it. 00:38:26.432 [2024-12-10 12:41:33.182106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.432 [2024-12-10 12:41:33.182121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.432 qpair failed and we were unable to recover it. 00:38:26.432 [2024-12-10 12:41:33.182309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.432 [2024-12-10 12:41:33.182325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.432 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.182399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.182414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.182554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.182569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.182662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.182681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.182762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.182777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.182875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.182890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.182976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.182991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 
00:38:26.712 [2024-12-10 12:41:33.183067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.183245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.183345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.183453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.183626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.183722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.183881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.183979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.183999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 
00:38:26.712 [2024-12-10 12:41:33.184337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.184962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.184976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.185042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.185056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.185137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.185152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.185239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.185253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 
00:38:26.712 [2024-12-10 12:41:33.185398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.185412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.185477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.185490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.185559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.712 [2024-12-10 12:41:33.185575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.712 qpair failed and we were unable to recover it. 00:38:26.712 [2024-12-10 12:41:33.185724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.185738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.185800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.185813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.185961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.185976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 
00:38:26.713 [2024-12-10 12:41:33.186517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.186930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.186998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.187152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.187237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.187484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.187590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.187747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 
00:38:26.713 [2024-12-10 12:41:33.187832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.187923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.187937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.188074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.188088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.188233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.188248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.188403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.188417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.188485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.188499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.188579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.188600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.188744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.188758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.188891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.188905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 
00:38:26.713 [2024-12-10 12:41:33.189109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.189911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.189925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.190003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.190017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.190091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.190105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 
00:38:26.713 [2024-12-10 12:41:33.190241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.713 [2024-12-10 12:41:33.190255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.713 qpair failed and we were unable to recover it. 00:38:26.713 [2024-12-10 12:41:33.190391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.190405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.190483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.190498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.190644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.190658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.190802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.190817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.190907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.190921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 
00:38:26.714 [2024-12-10 12:41:33.191496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.191981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.191995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.192070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.192085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.192218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.192236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.192417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.192432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.192500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.192514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.192663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.192677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 
00:38:26.714 [2024-12-10 12:41:33.192813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.192827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.193946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.193960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 
00:38:26.714 [2024-12-10 12:41:33.194043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 00:38:26.714 [2024-12-10 12:41:33.194963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.714 [2024-12-10 12:41:33.194977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.714 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-10 12:41:33.195044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.195858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-10 12:41:33.195947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.195961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.196974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.196988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-10 12:41:33.197132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.197962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.197975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 00:38:26.715 [2024-12-10 12:41:33.198139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.715 [2024-12-10 12:41:33.198152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.715 qpair failed and we were unable to recover it. 
00:38:26.715 [2024-12-10 12:41:33.198228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.715 [2024-12-10 12:41:33.198243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.715 qpair failed and we were unable to recover it.
00:38:26.721 (the three-line error triplet above repeats continuously from 12:41:33.198228 through 12:41:33.222003, mostly for tqpair=0x61500033fe80, with occasional instances for tqpairs 0x615000326480, 0x61500032ff80, and 0x615000350000; each attempt ends with "qpair failed and we were unable to recover it")
00:38:26.721 [2024-12-10 12:41:33.222221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.222349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.222450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.222621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.222707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.222792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.222877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.222966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.222979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.223112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.223209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 
00:38:26.721 [2024-12-10 12:41:33.223383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.223482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.223669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.223760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.223850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.223951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.223965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.224043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.224192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.224272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.224353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 
00:38:26.721 [2024-12-10 12:41:33.224432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.224538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.224625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.721 qpair failed and we were unable to recover it. 00:38:26.721 [2024-12-10 12:41:33.224723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.721 [2024-12-10 12:41:33.224735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.224802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.224815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.224953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.224967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.225144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.225158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.225349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.225378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.225474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.225499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.225604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.225628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 
00:38:26.722 [2024-12-10 12:41:33.225826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.225842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.225979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.225992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.226890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 
00:38:26.722 [2024-12-10 12:41:33.226977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.226990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.227080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.227092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.227164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.227192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.227403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.227416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.227498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.227512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.227597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.227611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.227776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.227790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.227943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.227956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.228156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.228175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.228252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.228265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 
00:38:26.722 [2024-12-10 12:41:33.228420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.228433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.228507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.228519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.228605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.228618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.228679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.228692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.228836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.228849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.229005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.229089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.229330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.229418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.229497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 
00:38:26.722 [2024-12-10 12:41:33.229707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.229810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.722 qpair failed and we were unable to recover it. 00:38:26.722 [2024-12-10 12:41:33.229915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.722 [2024-12-10 12:41:33.229928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.230073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.230170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.230283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.230442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.230593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.230685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.230787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 
00:38:26.723 [2024-12-10 12:41:33.230977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.230990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.231967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.231980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 
00:38:26.723 [2024-12-10 12:41:33.232117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.232934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.232947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 
00:38:26.723 [2024-12-10 12:41:33.233215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.233940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.233953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.234017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.234030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.234107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.234120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 
00:38:26.723 [2024-12-10 12:41:33.234259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.723 [2024-12-10 12:41:33.234272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.723 qpair failed and we were unable to recover it. 00:38:26.723 [2024-12-10 12:41:33.234363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.234376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.234442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.234454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.234542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.234556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.234620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.234633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.234785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.234870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.234883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.234949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.234961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 
00:38:26.724 [2024-12-10 12:41:33.235347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.235985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.235998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 
00:38:26.724 [2024-12-10 12:41:33.236279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.236935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.236948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.237023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.237035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 00:38:26.724 [2024-12-10 12:41:33.237106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.724 [2024-12-10 12:41:33.237119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.724 qpair failed and we were unable to recover it. 
00:38:26.724 [2024-12-10 12:41:33.237208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.724 [2024-12-10 12:41:33.237222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.724 qpair failed and we were unable to recover it.
[... the three records above repeat back-to-back for tqpair=0x61500033fe80, identical except for their microsecond timestamps, from 12:41:33.237 through 12:41:33.242 ...]
00:38:26.726 [2024-12-10 12:41:33.242902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect()/qpair-failure sequence for tqpair=0x61500033fe80 continues, timestamps 12:41:33.242 through 12:41:33.258 ...]
00:38:26.729 [2024-12-10 12:41:33.258865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.729 [2024-12-10 12:41:33.258894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.729 qpair failed and we were unable to recover it.
00:38:26.729 [2024-12-10 12:41:33.259079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.729 [2024-12-10 12:41:33.259110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.729 qpair failed and we were unable to recover it.
00:38:26.729 [2024-12-10 12:41:33.259239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.729 [2024-12-10 12:41:33.259274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.729 qpair failed and we were unable to recover it.
[... the failure sequence then resumes for tqpair=0x61500033fe80, identical except for timestamps, from 12:41:33.259 through 12:41:33.260 ...]
00:38:26.730 [2024-12-10 12:41:33.261066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.261082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.261216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.261233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.261386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.261403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.261549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.261564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.261765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.261781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.261920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.261935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.262070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.262163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.262276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.262388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 
00:38:26.730 [2024-12-10 12:41:33.262653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.262746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.262840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.262924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.262938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 
00:38:26.730 [2024-12-10 12:41:33.263559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.263903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.263919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.264045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.264072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.264196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.264228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.264331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.264358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.264479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.264497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.730 qpair failed and we were unable to recover it. 00:38:26.730 [2024-12-10 12:41:33.264588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.730 [2024-12-10 12:41:33.264603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.264689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.264705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 
00:38:26.731 [2024-12-10 12:41:33.264770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.264785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.264888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.264904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.264968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.264982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.265142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.265251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.265367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.265456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.265610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.265706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.265871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 
00:38:26.731 [2024-12-10 12:41:33.265956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.265971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.266862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.266877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 
00:38:26.731 [2024-12-10 12:41:33.267141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.267975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.267990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.268058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.268219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 
00:38:26.731 [2024-12-10 12:41:33.268324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.268406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.268504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.268604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.268711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.268908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.268937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.269029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.269052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.731 [2024-12-10 12:41:33.269157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.731 [2024-12-10 12:41:33.269185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.731 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.269275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.269299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.269462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.269485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 
00:38:26.732 [2024-12-10 12:41:33.269579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.269602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.269705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.269728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.269824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.269847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.269940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.269958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.270026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.270121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.270321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.270418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.270560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.270654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 
00:38:26.732 [2024-12-10 12:41:33.270745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.270911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.270926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.271075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.271091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.271267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.271282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.271352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.271368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.271526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.271541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.271684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.271699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.271788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.271803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.271879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.271893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 
00:38:26.732 [2024-12-10 12:41:33.272121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.272936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.272952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 
00:38:26.732 [2024-12-10 12:41:33.273206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.732 [2024-12-10 12:41:33.273834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:26.732 qpair failed and we were unable to recover it. 00:38:26.732 [2024-12-10 12:41:33.273911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.273928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.274018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.274202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 
00:38:26.733 [2024-12-10 12:41:33.274282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.274382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.274494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.274600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.274728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.274906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.274921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.275053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.275134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.275306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.275420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 
00:38:26.733 [2024-12-10 12:41:33.275530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.275638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.275729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.275885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.275900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.276051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.276066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.276147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.276162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.276306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.276322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.276406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.276421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.276555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.276570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 00:38:26.733 [2024-12-10 12:41:33.276656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.733 [2024-12-10 12:41:33.276671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.733 qpair failed and we were unable to recover it. 
00:38:26.733 [2024-12-10 12:41:33.276749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.733 [2024-12-10 12:41:33.276763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.733 qpair failed and we were unable to recover it.
00:38:26.733 [2024-12-10 12:41:33.278242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.733 [2024-12-10 12:41:33.278269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.733 qpair failed and we were unable to recover it.
00:38:26.733 [2024-12-10 12:41:33.278446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.734 [2024-12-10 12:41:33.278472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.734 qpair failed and we were unable to recover it.
00:38:26.734 [2024-12-10 12:41:33.278579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.734 [2024-12-10 12:41:33.278604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.734 qpair failed and we were unable to recover it.
(identical three-line records for tqpair=0x61500033fe80, each failing with errno = 111 against addr=10.0.0.2, port=4420, repeat continuously between 12:41:33.276749 and 12:41:33.301156; duplicate entries elided)
00:38:26.739 [2024-12-10 12:41:33.301141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.739 [2024-12-10 12:41:33.301156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.739 qpair failed and we were unable to recover it.
00:38:26.739 [2024-12-10 12:41:33.301226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.301330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.301412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.301570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.301668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.301760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.301854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.301978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.301994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 
00:38:26.739 [2024-12-10 12:41:33.302274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.302868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.302883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 
00:38:26.739 [2024-12-10 12:41:33.303381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.303900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.303915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.304001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.304016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.304181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.304197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.304277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.304293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.304453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.304468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 
00:38:26.739 [2024-12-10 12:41:33.304539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.304554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.304706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.304721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.739 qpair failed and we were unable to recover it. 00:38:26.739 [2024-12-10 12:41:33.304822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-12-10 12:41:33.304838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.304917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.304932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.305091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.305107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.305180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.305196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.305335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.305349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.305422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.305437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.305642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.305656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.305740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.305756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 
00:38:26.740 [2024-12-10 12:41:33.305900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.305915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.306852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.306867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 
00:38:26.740 [2024-12-10 12:41:33.307097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.307948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.307963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 
00:38:26.740 [2024-12-10 12:41:33.308225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.308905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.308921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.309081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.309097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.309163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.309181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.309267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.309282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 
00:38:26.740 [2024-12-10 12:41:33.309369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.309384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.740 qpair failed and we were unable to recover it. 00:38:26.740 [2024-12-10 12:41:33.309470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-12-10 12:41:33.309485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.309621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.309637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.309709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.309724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.309870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.309885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.309966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.309981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.310065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.310157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.310240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.310329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 
00:38:26.741 [2024-12-10 12:41:33.310411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.310508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.310662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.310830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.310845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.311050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.311067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.311199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.311214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.311390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.311405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.311502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.311517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.311587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.311603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.311751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.311766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 
00:38:26.741 [2024-12-10 12:41:33.311844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.311859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.312846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.312861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 
00:38:26.741 [2024-12-10 12:41:33.313100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 00:38:26.741 [2024-12-10 12:41:33.313943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-12-10 12:41:33.313958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.741 qpair failed and we were unable to recover it. 
00:38:26.741 [2024-12-10 12:41:33.314038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.314919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.314988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 
00:38:26.742 [2024-12-10 12:41:33.315088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.315923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.315938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 00:38:26.742 [2024-12-10 12:41:33.316009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.742 [2024-12-10 12:41:33.316024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.742 qpair failed and we were unable to recover it. 
00:38:26.742 [2024-12-10 12:41:33.316170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.742 [2024-12-10 12:41:33.316186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.742 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure message pair, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 12:41:33.316170 through 12:41:33.340521 (wall clock 00:38:26.742-00:38:26.748), mostly for tqpair=0x61500033fe80 and intermittently for tqpair=0x615000350000, 0x615000326480, and 0x61500032ff80, all against addr=10.0.0.2, port=4420 ...]
00:38:26.748 [2024-12-10 12:41:33.340607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.340622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.340769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.340786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.340873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.340889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 
00:38:26.748 [2024-12-10 12:41:33.341716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.341901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.341986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.342088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.342249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.342414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.342504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.342602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.342786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.342886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.342901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 
00:38:26.748 [2024-12-10 12:41:33.343054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.748 [2024-12-10 12:41:33.343879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.748 [2024-12-10 12:41:33.343894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.748 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 
00:38:26.749 [2024-12-10 12:41:33.344154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.344958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.344973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 
00:38:26.749 [2024-12-10 12:41:33.345248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.345946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.345961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 
00:38:26.749 [2024-12-10 12:41:33.346201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.346979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.346995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.347063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.347164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 
00:38:26.749 [2024-12-10 12:41:33.347328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.347439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.347599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.347762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.347851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.347938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.347953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.348088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.348102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.749 [2024-12-10 12:41:33.348239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.749 [2024-12-10 12:41:33.348255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.749 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.348428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.348444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.348518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.348534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 
00:38:26.750 [2024-12-10 12:41:33.348631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.348646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.348799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.348814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.348895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.348911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.348982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.348998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.349160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.349179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.349324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.349340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.349539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.349554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.349636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.349652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.349788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.349804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.349871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.349886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 
00:38:26.750 [2024-12-10 12:41:33.349960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.349975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-12-10 12:41:33.350953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-12-10 12:41:33.350968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.750 [2024-12-10 12:41:33.351003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:26.750 [2024-12-10 12:41:33.351014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:26.750 [2024-12-10 12:41:33.351026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:26.750 [2024-12-10 12:41:33.351034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:26.750 [... connect() retry triplets continue against tqpair=0x61500033fe80 through 12:41:33.353 ...]
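The app_setup_trace notices above give the recipe for inspecting the tracepoints this target was started with (group mask 0xFFFF). A minimal capture sketch, assuming an SPDK checkout where the tool lives at build/bin/spdk_trace (that path is an assumption; the '-s nvmf -i 0' arguments and the /dev/shm/nvmf_trace.0 file come verbatim from the notices):

    # Snapshot the live tracepoint events of the running nvmf app (shm instance 0),
    # exactly as the NOTICE lines suggest. Binary path inside the SPDK tree is assumed.
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

    # Or preserve the raw shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0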
00:38:26.751 [2024-12-10 12:41:33.353457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:38:26.751 [2024-12-10 12:41:33.353546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:38:26.751 [2024-12-10 12:41:33.353612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:38:26.751 [2024-12-10 12:41:33.353635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:38:26.751 [... connect() retry triplets continue against tqpair=0x61500033fe80, interleaved with the reactor start-up notices, through 12:41:33.355 ...]
00:38:26.751 [... one further retry triplet against tqpair=0x61500033fe80 ...]
00:38:26.751 [2024-12-10 12:41:33.355406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.751 [2024-12-10 12:41:33.355436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:26.751 qpair failed and we were unable to recover it.
00:38:26.751 [2024-12-10 12:41:33.355543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.751 [2024-12-10 12:41:33.355569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.751 qpair failed and we were unable to recover it.
00:38:26.751 A controller has encountered a failure and is being reset.
00:38:26.751 [2024-12-10 12:41:33.355758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.751 [2024-12-10 12:41:33.355794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325d00 with addr=10.0.0.2, port=4420
00:38:26.751 [2024-12-10 12:41:33.355816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set
00:38:26.751 [2024-12-10 12:41:33.355845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor
00:38:26.751 [2024-12-10 12:41:33.355868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:38:26.751 [2024-12-10 12:41:33.355888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:38:26.751 [2024-12-10 12:41:33.355911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:38:26.751 Unable to reset the controller.
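For reference, errno = 111 on Linux is ECONNREFUSED: every connect() above was refused because nothing was accepting on 10.0.0.2:4420 while the disconnect test held the target down. A quick probe sketch using bash's /dev/tcp redirection (the address and port are taken from the log; the probe itself is illustrative and not part of the test suite):

    # Check whether the NVMe/TCP listener the initiator keeps retrying is up.
    # A refused connection here is the same errno 111 seen in the storm above.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "listener is up on 10.0.0.2:4420"
    else
        echo "connection refused (errno 111) -- target listener still down"
    fi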
00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.319 12:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.319 Malloc0 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.319 [2024-12-10 12:41:34.059997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.319 [2024-12-10 12:41:34.088313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.319 12:41:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3903001 00:38:27.887 Controller properly reset. 00:38:33.165 Initializing NVMe Controllers 00:38:33.165 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:33.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:33.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:33.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:33.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:33.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:33.165 Initialization complete. Launching workers. 
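The RPC sequence traced above is the complete target bring-up for this test case: a 64 MiB malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with its namespace, and listeners for both the subsystem and discovery. A condensed sketch of the same bring-up as direct rpc.py calls, assuming a running nvmf_tgt and the standard SPDK tree layout where scripts/rpc.py talks to /var/tmp/spdk.sock by default:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t tcp -o             # transport flags as used by the test
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420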
00:38:33.165 Starting thread on core 1 00:38:33.165 Starting thread on core 2 00:38:33.165 Starting thread on core 3 00:38:33.165 Starting thread on core 0 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:33.165 00:38:33.165 real 0m11.347s 00:38:33.165 user 0m37.140s 00:38:33.165 sys 0m5.999s 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:33.165 ************************************ 00:38:33.165 END TEST nvmf_target_disconnect_tc2 00:38:33.165 ************************************ 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:33.165 rmmod nvme_tcp 00:38:33.165 rmmod nvme_fabrics 00:38:33.165 rmmod nvme_keyring 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3903565 ']' 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3903565 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3903565 ']' 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3903565 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3903565 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3903565' 00:38:33.165 killing process with pid 3903565 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3903565 00:38:33.165 12:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3903565 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:34.103 12:41:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.639 12:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:36.639 00:38:36.639 real 0m20.626s 00:38:36.639 user 1m6.926s 00:38:36.639 sys 0m10.689s 00:38:36.639 12:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.639 12:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:36.639 ************************************ 00:38:36.639 END TEST nvmf_target_disconnect 00:38:36.639 ************************************ 00:38:36.639 12:41:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:36.639 00:38:36.639 real 8m6.789s 00:38:36.639 user 19m21.059s 00:38:36.639 sys 2m7.235s 00:38:36.639 12:41:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.639 12:41:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.639 ************************************ 00:38:36.639 END TEST nvmf_host 00:38:36.639 ************************************ 00:38:36.639 12:41:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:38:36.639 12:41:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:38:36.639 12:41:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:36.639 12:41:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:36.639 12:41:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.639 12:41:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:36.639 ************************************ 00:38:36.639 START TEST nvmf_target_core_interrupt_mode 00:38:36.639 ************************************ 00:38:36.639 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:36.639 * Looking for test storage... 00:38:36.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:38:36.639 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:36.639 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:38:36.639 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:36.639 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:36.639 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:36.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.640 --rc genhtml_branch_coverage=1 00:38:36.640 --rc genhtml_function_coverage=1 00:38:36.640 --rc genhtml_legend=1 00:38:36.640 --rc geninfo_all_blocks=1 00:38:36.640 --rc geninfo_unexecuted_blocks=1 00:38:36.640 00:38:36.640 ' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:36.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.640 --rc genhtml_branch_coverage=1 00:38:36.640 --rc genhtml_function_coverage=1 00:38:36.640 --rc genhtml_legend=1 00:38:36.640 --rc geninfo_all_blocks=1 00:38:36.640 --rc geninfo_unexecuted_blocks=1 00:38:36.640 00:38:36.640 ' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:36.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.640 --rc genhtml_branch_coverage=1 00:38:36.640 --rc genhtml_function_coverage=1 00:38:36.640 --rc genhtml_legend=1 00:38:36.640 --rc geninfo_all_blocks=1 00:38:36.640 --rc geninfo_unexecuted_blocks=1 00:38:36.640 00:38:36.640 ' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:36.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.640 --rc genhtml_branch_coverage=1 00:38:36.640 --rc genhtml_function_coverage=1 00:38:36.640 --rc genhtml_legend=1 00:38:36.640 --rc geninfo_all_blocks=1 00:38:36.640 --rc geninfo_unexecuted_blocks=1 00:38:36.640 00:38:36.640 ' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:36.640 ************************************ 00:38:36.640 START TEST nvmf_abort 00:38:36.640 ************************************ 00:38:36.640 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:36.640 * Looking for test storage... 00:38:36.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.641 --rc genhtml_branch_coverage=1 00:38:36.641 --rc genhtml_function_coverage=1 00:38:36.641 --rc genhtml_legend=1 00:38:36.641 --rc geninfo_all_blocks=1 00:38:36.641 --rc geninfo_unexecuted_blocks=1 00:38:36.641 00:38:36.641 ' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.641 --rc genhtml_branch_coverage=1 00:38:36.641 --rc genhtml_function_coverage=1 00:38:36.641 --rc genhtml_legend=1 00:38:36.641 --rc geninfo_all_blocks=1 00:38:36.641 --rc geninfo_unexecuted_blocks=1 00:38:36.641 00:38:36.641 ' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.641 --rc genhtml_branch_coverage=1 00:38:36.641 --rc genhtml_function_coverage=1 00:38:36.641 --rc genhtml_legend=1 00:38:36.641 --rc geninfo_all_blocks=1 00:38:36.641 --rc geninfo_unexecuted_blocks=1 00:38:36.641 00:38:36.641 ' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:36.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.641 --rc genhtml_branch_coverage=1 00:38:36.641 --rc genhtml_function_coverage=1 00:38:36.641 --rc genhtml_legend=1 00:38:36.641 --rc geninfo_all_blocks=1 00:38:36.641 --rc geninfo_unexecuted_blocks=1 00:38:36.641 00:38:36.641 ' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:36.641 12:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:36.641 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:36.642 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:41.915 12:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:41.915 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
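The nvmf/common.sh trace above is NIC discovery: it builds tables of supported Intel (e810/x722) and Mellanox device IDs, then walks the PCI bus and matches this host's two E810 ports (vendor 0x8086, device 0x159b, driver ice). A rough equivalent with stock tools, where lspci and the sysfs path are my assumptions and the PCI address is the one found above:

  lspci -d 8086:159b                           # list E810-class ports, as matched above
  ls /sys/bus/pci/devices/0000:af:00.0/net/    # netdev name behind a PCI function (cvl_0_0 here)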
00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:41.915 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:41.915 Found net devices under 0000:af:00.0: cvl_0_0 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:41.915 Found net devices under 0000:af:00.1: cvl_0_1 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:41.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:41.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:38:41.915 00:38:41.915 --- 10.0.0.2 ping statistics --- 00:38:41.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.915 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:41.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:38:41.915 00:38:41.915 --- 10.0.0.1 ping statistics --- 00:38:41.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.915 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3908229 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3908229 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3908229 ']' 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.915 12:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.915 [2024-12-10 12:41:48.602191] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:41.915 [2024-12-10 12:41:48.604192] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:38:41.915 [2024-12-10 12:41:48.604277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.915 [2024-12-10 12:41:48.720418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:42.173 [2024-12-10 12:41:48.826437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:42.173 [2024-12-10 12:41:48.826478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:42.173 [2024-12-10 12:41:48.826490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:42.173 [2024-12-10 12:41:48.826499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:42.173 [2024-12-10 12:41:48.826508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:42.173 [2024-12-10 12:41:48.828727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:42.173 [2024-12-10 12:41:48.828794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.173 [2024-12-10 12:41:48.828806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:42.432 [2024-12-10 12:41:49.146561] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:42.432 [2024-12-10 12:41:49.147522] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:42.432 [2024-12-10 12:41:49.147993] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
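The target started above runs inside the cvl_0_0_ns_spdk namespace with --interrupt-mode, so its reactors on cores 1-3 (mask 0xE) park in epoll rather than busy-polling, and each spdk_thread is switched to interrupt mode as the notices show. A minimal sketch of the same launch plus a readiness wait, roughly what the harness's waitforlisten does; the polling loop and the spdk_get_version probe are assumptions, not lifted from this log:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  # poll the RPC socket until the app answers
  until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done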
00:38:42.432 [2024-12-10 12:41:49.148218] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.691 [2024-12-10 12:41:49.453903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.691 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.950 Malloc0 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.950 Delay0 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
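The Delay0 bdev created above wraps Malloc0 in a delay vbdev; the four 1000000 arguments are latencies in microseconds, injecting on the order of a second per read and write, which keeps I/O in flight long enough for the abort test to have commands worth aborting. The same construction as a direct rpc.py call, a sketch that assumes Malloc0 already exists, as it does at this point in the run:

  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s per I/O through Delay0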
00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.950 [2024-12-10 12:41:49.585720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.950 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:42.950 [2024-12-10 12:41:49.775289] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:45.483 Initializing NVMe Controllers 00:38:45.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:45.484 controller IO queue size 128 less than required 00:38:45.484 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:45.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:45.484 Initialization complete. Launching workers. 
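The trace above is target/abort.sh assembling its fixture: a malloc bdev wrapped in a delay bdev (the large -r/-t/-w/-n latencies keep reads in flight long enough to be aborted), exposed as namespace 1 of cnode0 over TCP, then driven by the abort example app. Condensed into plain commands, where rpc.py abbreviates the full scripts/rpc.py path shown in the trace and a running nvmf_tgt is assumed:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  # the delay bdev adds artificial latency on every op so I/Os stay abortable
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 1 core (mask 0x1), 1 s run, queue depth 128, log warnings only
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128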
00:38:45.484 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34439 00:38:45.484 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34496, failed to submit 66 00:38:45.484 success 34439, unsuccessful 57, failed 0 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:45.484 rmmod nvme_tcp 00:38:45.484 rmmod nvme_fabrics 00:38:45.484 rmmod nvme_keyring 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3908229 ']' 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3908229 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3908229 ']' 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3908229 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3908229 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3908229' 00:38:45.484 killing process with pid 3908229 
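Reading the abort counters from that run: the 34496 abort commands submitted split into 34439 successes and 57 the controller reported as unsuccessful (34439 + 57 = 34496), and a further 66 aborts could not be submitted at all. That lines up with the I/O side, where only 123 reads completed normally and 34439 "failed", i.e. were aborted while still in flight. The rest of the trace is ordinary teardown: delete the subsystem, unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, and kill target pid 3908229.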
00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3908229 00:38:45.484 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3908229 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:46.861 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:48.765 00:38:48.765 real 0m12.282s 00:38:48.765 user 0m12.389s 00:38:48.765 sys 0m5.348s 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.765 ************************************ 00:38:48.765 END TEST nvmf_abort 00:38:48.765 ************************************ 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:48.765 ************************************ 00:38:48.765 START TEST nvmf_ns_hotplug_stress 00:38:48.765 ************************************ 00:38:48.765 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:49.024 * Looking for test storage... 
00:38:49.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:49.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.024 --rc genhtml_branch_coverage=1 00:38:49.024 --rc genhtml_function_coverage=1 00:38:49.024 --rc genhtml_legend=1 00:38:49.024 --rc geninfo_all_blocks=1 00:38:49.024 --rc geninfo_unexecuted_blocks=1 00:38:49.024 00:38:49.024 ' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:49.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.024 --rc genhtml_branch_coverage=1 00:38:49.024 --rc genhtml_function_coverage=1 00:38:49.024 --rc genhtml_legend=1 00:38:49.024 --rc geninfo_all_blocks=1 00:38:49.024 --rc geninfo_unexecuted_blocks=1 00:38:49.024 00:38:49.024 ' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:49.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.024 --rc genhtml_branch_coverage=1 00:38:49.024 --rc genhtml_function_coverage=1 00:38:49.024 --rc genhtml_legend=1 00:38:49.024 --rc geninfo_all_blocks=1 00:38:49.024 --rc geninfo_unexecuted_blocks=1 00:38:49.024 00:38:49.024 ' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:49.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.024 --rc genhtml_branch_coverage=1 00:38:49.024 --rc genhtml_function_coverage=1 
00:38:49.024 --rc genhtml_legend=1 00:38:49.024 --rc geninfo_all_blocks=1 00:38:49.024 --rc geninfo_unexecuted_blocks=1 00:38:49.024 00:38:49.024 ' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:49.024 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:49.025 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:54.295 12:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:54.295 12:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:54.295 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:54.295 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:54.295 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:54.296 
12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:54.296 Found net devices under 0000:af:00.0: cvl_0_0 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:54.296 Found net devices under 0000:af:00.1: cvl_0_1 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:54.296 12:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:54.296 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:54.296 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.296 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:54.296 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.555 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.555 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.555 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:54.555 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:54.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:38:54.555 00:38:54.555 --- 10.0.0.2 ping statistics --- 00:38:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.555 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:38:54.555 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:54.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:38:54.555 00:38:54.555 --- 10.0.0.1 ping statistics --- 00:38:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.555 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:38:54.555 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.555 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3912461 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3912461 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3912461 ']' 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
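What nvmftestinit has just assembled is a loopback rig from the two E810 ports discovered above: the target-side port cvl_0_0 moves into its own network namespace with 10.0.0.2, the initiator-side port cvl_0_1 keeps 10.0.0.1 in the root namespace, and the target app itself runs under ip netns exec. Condensed from the trace (interface, namespace and IP values are this host's):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP (port 4420) in on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions, then launch the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE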
00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.556 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:54.556 [2024-12-10 12:42:01.299397] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:54.556 [2024-12-10 12:42:01.301597] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:38:54.556 [2024-12-10 12:42:01.301668] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.814 [2024-12-10 12:42:01.418057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:54.814 [2024-12-10 12:42:01.525719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:54.814 [2024-12-10 12:42:01.525761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.814 [2024-12-10 12:42:01.525774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.814 [2024-12-10 12:42:01.525784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.814 [2024-12-10 12:42:01.525793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.814 [2024-12-10 12:42:01.528101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:54.814 [2024-12-10 12:42:01.528172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.814 [2024-12-10 12:42:01.528186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:55.073 [2024-12-10 12:42:01.849253] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:55.073 [2024-12-10 12:42:01.850348] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:55.073 [2024-12-10 12:42:01.851030] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:55.073 [2024-12-10 12:42:01.851253] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
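Those startup notices decode the -m 0xE core mask directly: 0xE = 0b1110, so core 0 stays free and reactors come up on cores 1, 2 and 3, matching "Total cores available: 3" and the three "Reactor started on core N" lines. Because the app was launched with --interrupt-mode, each nvmf_tgt poll-group thread then runs interrupt-driven rather than busy polling, which is what the "Set spdk_thread (...) to intr mode" notices record.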
00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:55.357 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:55.636 [2024-12-10 12:42:02.317189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:55.636 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:55.905 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:55.905 [2024-12-10 12:42:02.713619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:56.164 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:56.164 12:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:56.423 Malloc0 00:38:56.423 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:56.682 Delay0 00:38:56.682 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:56.940 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:56.940 NULL1 00:38:56.940 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
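With cnode1 capped at 10 namespaces (-m 10) and two bdevs attached (Delay0 from the malloc+delay pair, plus NULL1, a 1000 MB null bdev with 512-byte blocks), the stress phase is a 30-second random-read perf run with namespace hot-plug churning underneath it. A condensed sketch of the loop the trace executes from here on; the real logic lives in test/nvmf/target/ns_hotplug_stress.sh, and rpc.py again abbreviates the full path:

  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                           # loop until perf exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove ns 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 $null_size                        # grow NULL1 one step
  done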
00:38:57.199 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3912979 00:38:57.199 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:57.199 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:38:57.199 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:57.458 12:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.717 12:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:57.717 12:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:57.717 true 00:38:57.976 12:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:38:57.976 12:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:57.976 12:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.235 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:58.235 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:58.493 true 00:38:58.493 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:38:58.493 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.751 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.010 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:59.010 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:59.010 true 00:38:59.269 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:38:59.269 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.269 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.527 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:59.527 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:59.786 true 00:38:59.786 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:38:59.786 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.046 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.304 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:00.304 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:00.563 true 00:39:00.563 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:00.563 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.821 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.821 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:00.821 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:01.080 true 00:39:01.080 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:01.080 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.338 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.597 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:39:01.597 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:01.597 true 00:39:01.597 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:01.597 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.856 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.115 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:02.115 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:02.374 true 00:39:02.374 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:02.374 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.632 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.891 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:02.891 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:02.891 true 00:39:03.151 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:03.151 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.151 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.410 12:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:03.410 12:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:03.669 true 00:39:03.669 12:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3912979 00:39:03.669 12:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.928 12:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.186 12:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:04.186 12:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:04.445 true 00:39:04.445 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:04.445 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:04.445 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.704 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:04.704 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:04.963 true 00:39:04.963 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:04.963 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.221 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.480 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:05.480 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:05.480 true 00:39:05.480 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:05.480 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.739 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.998 12:42:12 
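Each cycle in the trace above is one pass of the script's hot-plug loop: while the background I/O process (PID 3912979 in this run) stays alive, namespace 1 is detached and re-attached and the NULL1 bdev is grown by 1 MB. A minimal sketch of that loop, reconstructed from the @44-@50 xtrace lines (rpc_py, perf_pid, and the starting null_size are assumed names/values; only the RPC methods and arguments are taken from the log):

```bash
#!/usr/bin/env bash
# Sketch only -- reconstructed from the xtrace above, not the verbatim script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf_pid=$1      # assumption: PID of the background I/O generator (3912979 here)
null_size=1000   # assumption: the trace joins the count at 1007, 1008, ...

while kill -0 "$perf_pid" 2> /dev/null; do   # @44: run until the workload exits
	"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: detach NSID 1
	"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach Delay0
	null_size=$((null_size + 1))                                        # @49: next size in MB
	"$rpc_py" bdev_null_resize NULL1 "$null_size"    # @50: prints 'true' on success
done
```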
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:05.998 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:06.256 true 00:39:06.256 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:06.256 12:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.515 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.774 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:06.774 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:06.774 true 00:39:06.774 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:06.774 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.033 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.292 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:07.292 12:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:07.551 true 00:39:07.551 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:07.551 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.810 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.069 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:08.069 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:08.069 true 00:39:08.069 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:08.069 12:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.328 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.586 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:08.586 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:08.845 true 00:39:08.845 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:08.845 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.103 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.362 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:09.362 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:09.362 true 00:39:09.362 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:09.362 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.621 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.880 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:09.880 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:10.139 true 00:39:10.139 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:10.139 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.398 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.657 12:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:10.657 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:10.657 true 00:39:10.657 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:10.657 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.916 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.175 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:11.175 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:11.434 true 00:39:11.434 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:11.434 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.692 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.951 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:11.951 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:11.951 true 00:39:11.951 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:11.951 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.210 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.469 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:12.469 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:12.728 true 00:39:12.728 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:12.728 12:42:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.987 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.245 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:13.245 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:13.245 true 00:39:13.245 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:13.245 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.504 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.762 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:13.762 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:14.021 true 00:39:14.021 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:14.021 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.279 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.538 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:14.538 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:14.538 true 00:39:14.538 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:14.538 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.797 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.055 12:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:15.055 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:15.314 true 00:39:15.314 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:15.314 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.573 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.831 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:39:15.832 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:39:15.832 true 00:39:15.832 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:15.832 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.090 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.349 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:39:16.349 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:39:16.608 true 00:39:16.608 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:16.608 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.867 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.126 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:39:17.126 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:39:17.126 true 00:39:17.126 12:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:17.126 12:42:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.384 12:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.643 12:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:39:17.643 12:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:39:17.901 true 00:39:17.901 12:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:17.901 12:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.160 12:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.422 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:39:18.422 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:39:18.422 true 00:39:18.422 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:18.422 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.683 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.941 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:18.941 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:19.200 true 00:39:19.200 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:19.200 12:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.459 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.718 12:42:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:19.718 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:19.718 true 00:39:19.718 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:19.718 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.976 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.235 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:39:20.235 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:39:20.494 true 00:39:20.494 12:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:20.494 12:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.752 12:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.011 12:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:39:21.011 12:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:39:21.011 true 00:39:21.011 12:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:21.011 12:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.270 12:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.528 12:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:39:21.528 12:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:39:21.786 true 00:39:21.786 12:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:21.786 12:42:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.044 12:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.303 12:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:39:22.303 12:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:39:22.303 true 00:39:22.303 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:22.303 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.561 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.819 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:39:22.819 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:39:23.078 true 00:39:23.078 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:23.078 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.337 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.595 12:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:39:23.595 12:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:39:23.595 true 00:39:23.595 12:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:23.595 12:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.854 12:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.113 12:42:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:39:24.113 12:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:39:24.371 true 00:39:24.371 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:24.371 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.629 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.888 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:24.888 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:24.888 true 00:39:24.888 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:24.888 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.147 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.405 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:25.405 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:25.664 true 00:39:25.664 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:25.664 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.923 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.182 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:26.182 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:26.182 true 00:39:26.182 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:26.182 12:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.441 12:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.698 12:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:26.698 12:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:26.956 true 00:39:26.956 12:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979 00:39:26.956 12:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.215 12:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:27.473 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:39:27.473 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:39:27.473 Initializing NVMe Controllers 00:39:27.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:27.473 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:39:27.473 Controller IO queue size 128, less than required. 00:39:27.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:27.473 WARNING: Some requested NVMe devices were skipped 00:39:27.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:27.473 Initialization complete. Launching workers. 
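The "Initializing NVMe Controllers" block above and the latency summary below are printed by SPDK's perf example application, which drives I/O over TCP against the subsystem while namespaces are hot-plugged (NSID 1 happened to be detached when it connected, hence "Skipping inactive NS 1"). The invocation itself is not visible in this part of the log; a run of this shape could be launched roughly as follows, where queue depth, I/O size, workload, and duration are assumptions and only the transport string comes from the log:

```bash
# Hypothetical perf invocation; only traddr/trsvcid/subnqn are taken from the log.
./build/examples/perf -q 128 -o 4096 -w randread -t 30 \
	-r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```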
00:39:27.473 ========================================================
00:39:27.473                                                                            Latency(us)
00:39:27.473 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:39:27.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   24089.47      11.76    5313.31    1566.95    9699.23
00:39:27.473 ========================================================
00:39:27.473 Total                                                                    :   24089.47      11.76    5313.31    1566.95    9699.23
00:39:27.473
00:39:27.473 true
00:39:27.473 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3912979
00:39:27.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3912979) - No such process
00:39:27.473 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3912979
00:39:27.473 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:27.732 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:27.991 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:39:27.991 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:39:27.991 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:39:27.991 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:27.991 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:39:28.250 null0
00:39:28.250 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:28.250 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:28.250 12:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:39:28.250 null1
00:39:28.250 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:28.250 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:28.250 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:39:28.508 null2
00:39:28.508 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:28.508 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:28.508 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:28.766 null3 00:39:28.766 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:28.766 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:28.766 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:29.024 null4 00:39:29.024 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.024 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.024 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:29.024 null5 00:39:29.024 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.024 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.024 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:29.283 null6 00:39:29.283 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.283 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.283 12:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:29.542 null7 00:39:29.542 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.542 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.542 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:29.542 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
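From here the trace interleaves eight backgrounded copies of the script's add_remove helper, whose @14-@18 lines appear mixed together below: each worker binds one namespace ID to one null bdev and attaches/detaches it ten times. Read back from the trace it looks roughly like this (the for-loop spelling is an assumption; the trace shows the equivalent (( i = 0 )) / (( i < 10 )) / (( ++i )) arithmetic):

```bash
# Reconstruction of add_remove from the @14-@18 xtrace lines; sketch only.
add_remove() {
	local nsid=$1 bdev=$2                  # @14: e.g. nsid=1 bdev=null0
	for ((i = 0; i < 10; i++)); do         # @16
		# @17: attach the bdev at an explicit namespace ID
		"$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
		# @18: and immediately detach it again
		"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
	done
}
```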
00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
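The @62-@64 fragments above are the dispatch loop that launches those workers: one backgrounded add_remove per null bdev, with each child's PID appended to the pids array that the @66 wait line further down joins (PIDs 3918428 through 3918448 in this run). As a sketch, assuming the same variable names as the trace:

```bash
# Dispatch loop reconstructed from the @58-@64 xtrace lines; sketch only.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do    # @62
	add_remove $((i + 1)) "null$i" &    # @63: NSIDs 1..8 mapped onto null0..null7
	pids+=($!)                          # @64: remember each worker's PID
done
wait "${pids[@]}"                       # @66: block until all eight workers exit
```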
00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
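Note the two attach modes in this log: the earlier resize loop called nvmf_subsystem_add_ns with no namespace ID, whereas the workers pass -n so that each of the eight threads always reuses its own NSID and they never collide. For example (bdev and subsystem names from the log; without -n the target chooses a free NSID itself):

```bash
# Explicit NSID, as the add_remove workers do:
"$rpc_py" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
# No -n, as in the earlier resize loop -- the target assigns the NSID:
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```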
00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3918428 3918431 3918433 3918436 3918439 3918443 3918445 3918448 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.543 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:29.803 12:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:29.803 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:30.061 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:36 
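The @62/@63 pairs and the eight PIDs handed to wait imply a launcher of roughly the following shape. The pids array name and nthreads=8 are assumptions read off the trace, which shows "(( i < nthreads ))" and add_remove 8 null7 as the last spawn:

    # Launcher shape implied by the @62-@66 markers (sketch only).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # nsid 1..8 paired with bdevs null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                       # cf. "@66 wait 3918428 3918431 ..."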
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.320 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.580 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:30.839 12:42:37 
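Stripped of the xtrace prefixes, each worker iteration is two plain RPC round-trips against the running target. Replayed by hand they look like this; the path and argument order are copied verbatim from the traced calls, and only the rpc shell variable is added for brevity:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # attach bdev null3 to subsystem cnode1 as namespace 4 ...
    $rpc nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
    # ... then hot-remove namespace 4 again
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4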
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:30.839 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:31.098 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.099 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.357 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:31.358 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:31.358 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.358 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:31.358 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:31.358 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:31.358 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:31.358 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:31.616 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.876 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:32.135 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.135 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.135 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.135 12:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:32.135 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.135 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.135 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.135 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.394 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.395 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:32.395 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.654 
12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.654 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.913 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 
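Between bursts it can help to confirm what the target actually exposes at a given instant. nvmf_get_subsystems is the stock SPDK RPC for that; the jq filter below is an illustrative assumption, not something this test runs:

    # List the namespace IDs currently attached to cnode1 (jq filter assumed):
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_get_subsystems \
        | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'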
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.173 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:33.432 12:42:40 
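The stretch that follows looks scrambled, with "(( ++i ))" twice in a row and then "(( i < 10 ))" twice, but nothing is wrong: eight background subshells share one xtrace stream, so their loop counters interleave nondeterministically. A toy repro with no SPDK involved:

    # Four background loops writing to one stdout interleave the same way:
    for n in 1 2 3 4; do
        ( for ((i = 0; i < 3; i++)); do echo "worker $n: i=$i"; done ) &
    done
    wait    # the "worker N" lines arrive in a different order on every run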
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:39:33.432 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.690 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.690 rmmod nvme_tcp 00:39:33.691 rmmod nvme_fabrics 00:39:33.691 rmmod nvme_keyring 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3912461 ']' 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3912461 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3912461 ']' 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3912461 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3912461 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3912461' 00:39:33.691 killing 
process with pid 3912461 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3912461 00:39:33.691 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3912461 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.067 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.997 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.997 00:39:36.997 real 0m48.109s 00:39:36.997 user 3m1.577s 00:39:36.997 sys 0m20.985s 00:39:36.997 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.997 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:36.997 ************************************ 00:39:36.997 END TEST nvmf_ns_hotplug_stress 00:39:36.997 ************************************ 00:39:36.997 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:36.997 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:36.997 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:36.997 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:36.997 ************************************ 00:39:36.997 START TEST nvmf_delete_subsystem 00:39:36.997 ************************************ 00:39:36.998 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh 
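The teardown traced above funnels through killprocess; its checks (@954-@978 in common/autotest_common.sh) reduce to roughly the sketch below. It is simplified: the real helper also branches on uname (the "'[' Linux = Linux ']'" step) and validates its argument, which this sketch folds into the kill -0 probe:

    # Simplified shape of the killprocess helper being traced (sketch only).
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0     # "@958": already gone?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # "@960": here "reactor_1"
        if [[ $name != sudo ]]; then                # "@964": don't signal a sudo wrapper
            echo "killing process with pid $pid"    # "@972"
            kill "$pid"                             # "@973"
            wait "$pid" || true                     # "@978": reap it, ignore its status
        fi
    }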
--transport=tcp --interrupt-mode 00:39:36.998 * Looking for test storage... 00:39:37.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:37.287 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:37.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.288 --rc genhtml_branch_coverage=1 00:39:37.288 --rc genhtml_function_coverage=1 00:39:37.288 --rc genhtml_legend=1 00:39:37.288 --rc geninfo_all_blocks=1 00:39:37.288 --rc geninfo_unexecuted_blocks=1 00:39:37.288 00:39:37.288 ' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:37.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.288 --rc genhtml_branch_coverage=1 00:39:37.288 --rc genhtml_function_coverage=1 00:39:37.288 --rc genhtml_legend=1 00:39:37.288 --rc geninfo_all_blocks=1 00:39:37.288 --rc geninfo_unexecuted_blocks=1 00:39:37.288 00:39:37.288 ' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:37.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.288 --rc genhtml_branch_coverage=1 00:39:37.288 --rc genhtml_function_coverage=1 00:39:37.288 --rc genhtml_legend=1 00:39:37.288 --rc geninfo_all_blocks=1 00:39:37.288 --rc geninfo_unexecuted_blocks=1 00:39:37.288 00:39:37.288 ' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:37.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.288 --rc genhtml_branch_coverage=1 00:39:37.288 --rc genhtml_function_coverage=1 00:39:37.288 --rc 
genhtml_legend=1 00:39:37.288 --rc geninfo_all_blocks=1 00:39:37.288 --rc geninfo_unexecuted_blocks=1 00:39:37.288 00:39:37.288 ' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.288 12:42:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.288 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:42.577 12:42:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:42.577 12:42:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:42.577 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:42.577 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:42.577 12:42:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:42.577 Found net devices under 0000:af:00.0: cvl_0_0 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:42.577 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:42.578 Found net devices under 0000:af:00.1: cvl_0_1 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:42.578 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:42.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:42.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:39:42.578 00:39:42.578 --- 10.0.0.2 ping statistics --- 00:39:42.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.578 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:42.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:42.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:39:42.578 00:39:42.578 --- 10.0.0.1 ping statistics --- 00:39:42.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.578 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3922883 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3922883 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3922883 ']' 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:42.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
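For reference, the nvmftestinit plumbing above reduces to roughly this shell sequence; it is a condensed sketch of the commands visible in this log (cvl_0_0 and cvl_0_1 are the two ice-driver ports on this rig), not the full logic of nvmf/common.sh:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, namespace side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host

Both pings succeeding (0% packet loss above) confirms the target can listen on 10.0.0.2:4420 inside the namespace while the initiator connects from 10.0.0.1 on the host.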
00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:42.578 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:42.578 [2024-12-10 12:42:49.372003] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:42.578 [2024-12-10 12:42:49.374088] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:39:42.578 [2024-12-10 12:42:49.374156] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:42.837 [2024-12-10 12:42:49.490182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:42.837 [2024-12-10 12:42:49.598724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:42.837 [2024-12-10 12:42:49.598765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:42.837 [2024-12-10 12:42:49.598777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:42.837 [2024-12-10 12:42:49.598786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:42.837 [2024-12-10 12:42:49.598798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:42.837 [2024-12-10 12:42:49.600805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.837 [2024-12-10 12:42:49.600817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.096 [2024-12-10 12:42:49.913623] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:43.096 [2024-12-10 12:42:49.914238] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:43.096 [2024-12-10 12:42:49.914449] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
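The nvmfappstart call above comes down to launching nvmf_tgt inside that namespace and waiting for its RPC socket. A minimal sketch, assuming waitforlisten simply polls for /var/tmp/spdk.sock while the process stays alive (the real helper in autotest_common.sh does more bookkeeping):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Block until the app is up and has created its RPC listen socket.
while kill -0 "$nvmfpid" 2>/dev/null && [ ! -S /var/tmp/spdk.sock ]; do
    sleep 0.5
done

With -m 0x3 the target gets two reactors (cores 0 and 1, matching the reactor_run notices above), and --interrupt-mode is why each spdk_thread is switched to intr mode rather than left polling.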
00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.664 [2024-12-10 12:42:50.221790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.664 [2024-12-10 12:42:50.242018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.664 NULL1 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.664 12:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.664 Delay0 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3922949 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:43.664 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:43.664 [2024-12-10 12:42:50.376360] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
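Spelled out, the rpc_cmd configuration above is the sequence below (a sketch using scripts/rpc.py, the CLI that rpc_cmd drives, assuming the default /var/tmp/spdk.sock socket). The delay bdev is the crux of the test: with -r/-t/-w/-n all set to 1000000 us, every I/O against Delay0 takes about a second, so the queue-depth-128 perf workload started next is guaranteed to still be in flight when the subsystem is deleted:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read/write latencies, in us
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O from cores 2-3 (-c 0xC, hence the 'from core 2/3' rows in the
# results table further down), then delete the subsystem out from under it.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The flood of 'completed with error (sct=0, sc=8)' lines that follows is the expected outcome: deleting the subsystem aborts the queued I/Os, and spdk_nvme_perf reports each aborted request as a failure before exiting with 'errors occurred'.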
00:39:45.564 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:45.564 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.564 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.822 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 [2024-12-10 12:42:52.419536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 starting I/O failed: -6 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 [2024-12-10 12:42:52.423125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 
00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 [2024-12-10 12:42:52.423876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write 
completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Read completed with error (sct=0, sc=8) 00:39:45.823 Write completed with error (sct=0, sc=8) 00:39:45.824 Write completed with error (sct=0, sc=8) 00:39:45.824 Write completed with error (sct=0, sc=8) 00:39:45.824 Read completed with error (sct=0, sc=8) 00:39:45.824 Read completed with error (sct=0, sc=8) 00:39:45.824 [2024-12-10 12:42:52.424881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:39:46.757 [2024-12-10 12:42:53.392481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 [2024-12-10 12:42:53.423904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read 
completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 [2024-12-10 12:42:53.424596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 [2024-12-10 12:42:53.425460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error 
(sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Read completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 Write completed with error (sct=0, sc=8) 00:39:46.757 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:46.757 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3922949 00:39:46.757 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:46.757 [2024-12-10 12:42:53.434498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:39:46.757 Initializing NVMe Controllers 00:39:46.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:46.757 Controller IO queue size 128, less than required. 00:39:46.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:46.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:46.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:46.757 Initialization complete. Launching workers. 00:39:46.757 ======================================================== 00:39:46.757 Latency(us) 00:39:46.757 Device Information : IOPS MiB/s Average min max 00:39:46.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.26 0.09 962545.05 1284.29 1045483.99 00:39:46.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.89 0.08 868812.61 637.49 1015253.29 00:39:46.757 ======================================================== 00:39:46.757 Total : 335.15 0.16 918386.66 637.49 1045483.99 00:39:46.757 00:39:46.757 [2024-12-10 12:42:53.436170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:39:46.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3922949 00:39:47.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3922949) - No such process 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3922949 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3922949 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:39:47.324 12:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3922949 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.324 [2024-12-10 12:42:53.962129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3923613 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:47.324 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:47.324 [2024-12-10 12:42:54.065454] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:47.890 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:47.890 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:47.890 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:48.454 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:48.454 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:48.454 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:48.712 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:48.712 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:48.712 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:49.278 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:49.278 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:49.278 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:49.844 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:49.844 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:49.844 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:50.410 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:50.410 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:50.410 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:50.668 Initializing NVMe Controllers 00:39:50.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:50.668 Controller IO queue size 128, less than required. 00:39:50.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:39:50.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:50.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:50.668 Initialization complete. Launching workers. 00:39:50.668 ======================================================== 00:39:50.668 Latency(us) 00:39:50.668 Device Information : IOPS MiB/s Average min max 00:39:50.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004562.35 1000220.41 1046917.71 00:39:50.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005616.20 1000255.39 1041409.71 00:39:50.668 ======================================================== 00:39:50.668 Total : 256.00 0.12 1005089.28 1000220.41 1046917.71 00:39:50.668 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3923613 00:39:50.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3923613) - No such process 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3923613 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:50.927 rmmod nvme_tcp 00:39:50.927 rmmod nvme_fabrics 00:39:50.927 rmmod nvme_keyring 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3922883 ']' 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3922883 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3922883 ']' 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3922883 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # uname 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922883 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922883' 00:39:50.927 killing process with pid 3922883 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3922883 00:39:50.927 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3922883 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:52.303 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:52.304 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.304 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:52.304 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:54.207 00:39:54.207 real 0m17.075s 00:39:54.207 user 0m27.150s 00:39:54.207 sys 0m5.820s 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:54.207 ************************************ 00:39:54.207 END TEST nvmf_delete_subsystem 00:39:54.207 ************************************ 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:54.207 ************************************ 00:39:54.207 START TEST nvmf_host_management 00:39:54.207 ************************************ 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:54.207 * Looking for test storage... 00:39:54.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:39:54.207 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:54.207 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:54.466 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:54.466 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:54.466 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:54.466 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:54.466 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:54.466 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.467 --rc genhtml_branch_coverage=1 00:39:54.467 --rc genhtml_function_coverage=1 00:39:54.467 --rc genhtml_legend=1 00:39:54.467 --rc geninfo_all_blocks=1 00:39:54.467 --rc geninfo_unexecuted_blocks=1 00:39:54.467 00:39:54.467 ' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.467 --rc genhtml_branch_coverage=1 00:39:54.467 --rc genhtml_function_coverage=1 00:39:54.467 --rc genhtml_legend=1 00:39:54.467 --rc geninfo_all_blocks=1 00:39:54.467 --rc geninfo_unexecuted_blocks=1 00:39:54.467 00:39:54.467 ' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.467 --rc genhtml_branch_coverage=1 00:39:54.467 --rc genhtml_function_coverage=1 00:39:54.467 --rc genhtml_legend=1 00:39:54.467 --rc geninfo_all_blocks=1 00:39:54.467 --rc geninfo_unexecuted_blocks=1 00:39:54.467 00:39:54.467 ' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:54.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.467 --rc genhtml_branch_coverage=1 00:39:54.467 --rc genhtml_function_coverage=1 00:39:54.467 --rc genhtml_legend=1 
00:39:54.467 --rc geninfo_all_blocks=1 00:39:54.467 --rc geninfo_unexecuted_blocks=1 00:39:54.467 00:39:54.467 ' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:54.467 12:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:54.467 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:59.735 12:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:59.735 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:59.735 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:59.735 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
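[Editor's note] The trace above is nvmf/common.sh enumerating NICs through sysfs rather than parsing lspci: for each supported PCI function it globs /sys/bus/pci/devices/$pci/net/* and strips the directory prefix to recover the kernel interface name, which produces the "Found net devices under 0000:af:00.x: cvl_0_x" echoes just below. A minimal standalone sketch of that discovery step, with the device-ID filter cut down to the single 0x8086:0x159b ID seen in this run (an assumption for brevity; the real list also covers the other e810/x722 and Mellanox IDs set up earlier):

    #!/usr/bin/env bash
    # Sketch: list kernel net interfaces behind each matching PCI function,
    # the way gather_supported_nvmf_pci_devs does it via sysfs.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue
        # A NIC bound to a kernel driver exposes its interfaces under net/.
        pci_net_devs=("$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue    # skip if bound to vfio/uio
        pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the ifname
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    done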
00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:59.736 Found net devices under 0000:af:00.0: cvl_0_0 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:59.736 Found net devices under 0000:af:00.1: cvl_0_1 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:59.736 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:59.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:59.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:39:59.736 00:39:59.736 --- 10.0.0.2 ping statistics --- 00:39:59.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.736 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:59.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:59.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:39:59.736 00:39:59.736 --- 10.0.0.1 ping statistics --- 00:39:59.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.736 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3927582 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3927582 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3927582 ']' 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
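[Editor's note] The ping exchange above verifies the plumbing that nvmf_tcp_init just built: one e810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Consolidated from the traces, the setup amounts to the following sketch (run as root; error handling and the cleanup path are omitted):

    # Sketch: point-to-point target/initiator topology over two physical
    # ports, mirroring the ip/iptables commands traced above.
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1        # and back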
00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:59.736 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:59.736 [2024-12-10 12:43:06.333046] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:59.736 [2024-12-10 12:43:06.335154] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:39:59.736 [2024-12-10 12:43:06.335228] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:59.736 [2024-12-10 12:43:06.453442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:59.995 [2024-12-10 12:43:06.562744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:59.995 [2024-12-10 12:43:06.562781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:59.995 [2024-12-10 12:43:06.562793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:59.995 [2024-12-10 12:43:06.562803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:59.995 [2024-12-10 12:43:06.562812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:59.995 [2024-12-10 12:43:06.564994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:59.995 [2024-12-10 12:43:06.565067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:59.995 [2024-12-10 12:43:06.565906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.995 [2024-12-10 12:43:06.565921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:00.254 [2024-12-10 12:43:06.886446] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:00.254 [2024-12-10 12:43:06.888067] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:00.254 [2024-12-10 12:43:06.889968] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:00.254 [2024-12-10 12:43:06.890827] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:00.254 [2024-12-10 12:43:06.891123] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
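[Editor's note] While the "Waiting for process..." banner above is printed, the test polls for the target's RPC socket; once it answers, everything else is configured over JSON-RPC, as the create-transport and listener traces below show. Issued directly with scripts/rpc.py instead of the test's rpc_cmd wrapper, the target setup is roughly the following sketch. The transport flags, listener address/port, and serial are copied from the traces; the bdev_malloc_create line is inferred from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE and the Malloc0 name rather than from a visible trace, so treat it as an assumption:

    # Sketch: the RPC sequence behind starttarget, via scripts/rpc.py.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk bdev, 512 B blocks (assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420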
00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.513 [2024-12-10 12:43:07.178645] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.513 Malloc0 00:40:00.513 [2024-12-10 12:43:07.298936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:00.513 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3927795 00:40:00.772 12:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3927795 /var/tmp/bdevperf.sock 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3927795 ']' 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:00.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:00.772 { 00:40:00.772 "params": { 00:40:00.772 "name": "Nvme$subsystem", 00:40:00.772 "trtype": "$TEST_TRANSPORT", 00:40:00.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:00.772 "adrfam": "ipv4", 00:40:00.772 "trsvcid": "$NVMF_PORT", 00:40:00.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:00.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:00.772 "hdgst": ${hdgst:-false}, 00:40:00.772 "ddgst": ${ddgst:-false} 00:40:00.772 }, 00:40:00.772 "method": "bdev_nvme_attach_controller" 00:40:00.772 } 00:40:00.772 EOF 00:40:00.772 )") 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
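gen_nvmf_target_json above works by expanding one heredoc template per subsystem number (here just 0), joining the fragments with IFS=',', and passing the result through jq; the expanded JSON that bdevperf receives is printed next. A simplified standalone sketch of the same pattern, with the environment the trace provides hard-coded as assumptions:

# Stand-in for gen_nvmf_target_json: expand the per-subsystem template, let jq validate it.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420   # assumed values, taken from the trace
gen_config() {
  local subsystem=${1:-0}
  # hdgst/ddgst default to false when the variables are unset, exactly as in the template
  cat <<EOF | jq .
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_config 0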
00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:00.772 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:00.772 "params": { 00:40:00.772 "name": "Nvme0", 00:40:00.772 "trtype": "tcp", 00:40:00.772 "traddr": "10.0.0.2", 00:40:00.772 "adrfam": "ipv4", 00:40:00.772 "trsvcid": "4420", 00:40:00.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:00.772 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:00.772 "hdgst": false, 00:40:00.772 "ddgst": false 00:40:00.772 }, 00:40:00.772 "method": "bdev_nvme_attach_controller" 00:40:00.772 }' 00:40:00.772 [2024-12-10 12:43:07.422303] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:40:00.772 [2024-12-10 12:43:07.422393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927795 ] 00:40:00.772 [2024-12-10 12:43:07.534979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.030 [2024-12-10 12:43:07.647680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.597 Running I/O for 10 seconds... 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:40:01.598 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=617 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 617 -ge 100 ']' 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.858 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.858 [2024-12-10 12:43:08.599495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.858 [2024-12-10 12:43:08.599546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.858 [2024-12-10 12:43:08.599571] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-10 12:43:08.599583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... 59 further WRITE/completion pairs elided: cid 2 through 60, lba 90368 through 97792 in steps of 128 blocks, each completed as ABORTED - SQ DELETION (00/08) ...] 00:40:01.859 [2024-12-10 12:43:08.600852] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.859 [2024-12-10 12:43:08.600862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.859 [2024-12-10 12:43:08.600873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.859 [2024-12-10 12:43:08.600882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.859 [2024-12-10 12:43:08.600893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.859 [2024-12-10 12:43:08.600902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.860 [2024-12-10 12:43:08.600940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:01.860 [2024-12-10 12:43:08.602195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:01.860 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.860 task offset: 90112 on job bdev=Nvme0n1 fails 00:40:01.860 00:40:01.860 Latency(us) 00:40:01.860 [2024-12-10T11:43:08.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:01.860 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:01.860 Job: Nvme0n1 ended in about 0.41 seconds with error 00:40:01.860 Verification LBA range: start 0x0 length 0x400 00:40:01.860 Nvme0n1 : 0.41 1728.63 108.04 157.15 0.00 32995.66 3604.48 32206.26 00:40:01.860 [2024-12-10T11:43:08.686Z] =================================================================================================================== 00:40:01.860 [2024-12-10T11:43:08.686Z] Total : 1728.63 108.04 157.15 0.00 32995.66 3604.48 32206.26 00:40:01.860 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:01.860 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.860 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.860 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.860 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:01.860 [2024-12-10 12:43:08.617799] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:01.860 [2024-12-10 12:43:08.617837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:40:02.118 [2024-12-10 12:43:08.751405] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
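The fault injection in this phase is purely RPC-driven: nvmf_subsystem_remove_host revokes the host's access, the target tears down the active queue pair (hence the run of ABORTED - SQ DELETION completions and the CQ transport error above), and nvmf_subsystem_add_host lets the bdev_nvme layer's automatic controller reset reconnect, which is why the log ends with "Resetting controller successful." The same sequence, sketched with rpc.py against the NQNs taken from the trace:

rpc="$SPDK_DIR"/scripts/rpc.py   # assumption: $SPDK_DIR as in the earlier sketch
# Revoke the host: all in-flight WRITEs on the qpair complete as ABORTED - SQ DELETION
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-allow the host so the initiator's controller reset can reconnect
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # the test waits a moment before checking that the reset succeeded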
00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3927795 00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:03.054 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:03.054 { 00:40:03.054 "params": { 00:40:03.055 "name": "Nvme$subsystem", 00:40:03.055 "trtype": "$TEST_TRANSPORT", 00:40:03.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:03.055 "adrfam": "ipv4", 00:40:03.055 "trsvcid": "$NVMF_PORT", 00:40:03.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:03.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:03.055 "hdgst": ${hdgst:-false}, 00:40:03.055 "ddgst": ${ddgst:-false} 00:40:03.055 }, 00:40:03.055 "method": "bdev_nvme_attach_controller" 00:40:03.055 } 00:40:03.055 EOF 00:40:03.055 )") 00:40:03.055 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:03.055 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:40:03.055 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:03.055 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:03.055 "params": { 00:40:03.055 "name": "Nvme0", 00:40:03.055 "trtype": "tcp", 00:40:03.055 "traddr": "10.0.0.2", 00:40:03.055 "adrfam": "ipv4", 00:40:03.055 "trsvcid": "4420", 00:40:03.055 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.055 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:03.055 "hdgst": false, 00:40:03.055 "ddgst": false 00:40:03.055 }, 00:40:03.055 "method": "bdev_nvme_attach_controller" 00:40:03.055 }' 00:40:03.055 [2024-12-10 12:43:09.695023] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:40:03.055 [2024-12-10 12:43:09.695122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928258 ] 00:40:03.055 [2024-12-10 12:43:09.807242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.313 [2024-12-10 12:43:09.917843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.881 Running I/O for 1 seconds... 
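Note that both bdevperf runs take their config as --json /dev/fd/NN rather than a file on disk: the JSON comes from gen_nvmf_target_json through bash process substitution. A minimal sketch of that invocation pattern, reusing the hypothetical gen_config and $SPDK_DIR from the sketches above:

# bash turns <(gen_config 0) into /dev/fd/NN, which bdevperf opens as its config file;
# -q 64 queue depth, -o 65536-byte I/Os, verify workload, 1-second run, as in the second pass
"$SPDK_DIR"/build/examples/bdevperf --json <(gen_config 0) \
    -q 64 -o 65536 -w verify -t 1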
00:40:04.817 1792.00 IOPS, 112.00 MiB/s 00:40:04.817 Latency(us) 00:40:04.817 [2024-12-10T11:43:11.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.817 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:04.817 Verification LBA range: start 0x0 length 0x400 00:40:04.817 Nvme0n1 : 1.01 1833.74 114.61 0.00 0.00 34333.27 5710.99 30833.13 00:40:04.817 [2024-12-10T11:43:11.643Z] =================================================================================================================== 00:40:04.817 [2024-12-10T11:43:11.643Z] Total : 1833.74 114.61 0.00 0.00 34333.27 5710.99 30833.13 00:40:05.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3927795 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:05.754 rmmod nvme_tcp 00:40:05.754 rmmod nvme_fabrics 00:40:05.754 rmmod nvme_keyring 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3927582 ']' 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3927582 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3927582 ']' 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3927582 00:40:05.754 12:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:05.754 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3927582 00:40:06.013 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:06.013 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:06.013 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3927582' 00:40:06.013 killing process with pid 3927582 00:40:06.013 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3927582 00:40:06.013 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3927582 00:40:07.391 [2024-12-10 12:43:13.820139] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:07.391 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.295 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:09.295 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:09.295 00:40:09.295 real 0m15.097s 00:40:09.295 user 0m28.438s 00:40:09.295 sys 0m6.411s 00:40:09.295 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.295 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:40:09.295 ************************************ 00:40:09.295 END TEST nvmf_host_management 00:40:09.295 ************************************ 00:40:09.295 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:09.295 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:09.295 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.295 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:09.295 ************************************ 00:40:09.295 START TEST nvmf_lvol 00:40:09.295 ************************************ 00:40:09.295 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:09.296 * Looking for test storage... 00:40:09.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.555 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.555 --rc genhtml_branch_coverage=1 00:40:09.555 --rc genhtml_function_coverage=1 00:40:09.555 --rc genhtml_legend=1 00:40:09.555 --rc geninfo_all_blocks=1 00:40:09.555 --rc geninfo_unexecuted_blocks=1 00:40:09.555 00:40:09.555 ' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:09.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.556 --rc genhtml_branch_coverage=1 00:40:09.556 --rc genhtml_function_coverage=1 00:40:09.556 --rc genhtml_legend=1 00:40:09.556 --rc geninfo_all_blocks=1 00:40:09.556 --rc geninfo_unexecuted_blocks=1 00:40:09.556 00:40:09.556 ' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:09.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.556 --rc genhtml_branch_coverage=1 00:40:09.556 --rc genhtml_function_coverage=1 00:40:09.556 --rc genhtml_legend=1 00:40:09.556 --rc geninfo_all_blocks=1 00:40:09.556 --rc geninfo_unexecuted_blocks=1 00:40:09.556 00:40:09.556 ' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:09.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.556 --rc genhtml_branch_coverage=1 00:40:09.556 --rc genhtml_function_coverage=1 00:40:09.556 --rc genhtml_legend=1 00:40:09.556 --rc geninfo_all_blocks=1 00:40:09.556 --rc geninfo_unexecuted_blocks=1 00:40:09.556 00:40:09.556 ' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:09.556 12:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:09.556 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:14.828 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:14.829 12:43:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:14.829 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:14.829 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:14.829 Found net devices under 0000:af:00.0: cvl_0_0 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:14.829 Found net devices under 0000:af:00.1: cvl_0_1 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:14.829 
12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:14.829 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:15.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:15.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:40:15.089 00:40:15.089 --- 10.0.0.2 ping statistics --- 00:40:15.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.089 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:15.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:15.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:40:15.089 00:40:15.089 --- 10.0.0.1 ping statistics --- 00:40:15.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.089 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3932185 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3932185 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3932185 ']' 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.089 12:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:15.089 [2024-12-10 12:43:21.871059] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
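Both pings come back, so nvmfappstart can launch the target. Note nvmf/common.sh@293 above: NVMF_APP is prefixed with the namespace wrapper, which is why the nvmf_tgt invocation at common.sh@508 runs under ip netns exec with -m 0x7 (cores 0-2) and --interrupt-mode. waitforlisten then blocks until the app's RPC socket answers. A minimal approximation of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket (the polling loop is a reconstruction of waitforlisten's observable behavior, not a copy of it):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to accept commands.
    while ! "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done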
00:40:15.089 [2024-12-10 12:43:21.873093] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:40:15.089 [2024-12-10 12:43:21.873161] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:15.348 [2024-12-10 12:43:21.991006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:15.348 [2024-12-10 12:43:22.099194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:15.348 [2024-12-10 12:43:22.099238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:15.348 [2024-12-10 12:43:22.099250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:15.348 [2024-12-10 12:43:22.099259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:15.348 [2024-12-10 12:43:22.099268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:15.348 [2024-12-10 12:43:22.101477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.348 [2024-12-10 12:43:22.101542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.348 [2024-12-10 12:43:22.101551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:15.607 [2024-12-10 12:43:22.418975] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:15.608 [2024-12-10 12:43:22.419894] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:15.608 [2024-12-10 12:43:22.420586] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:15.608 [2024-12-10 12:43:22.420796] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
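Startup is healthy: three reactors come up for mask 0x7 and every spdk_thread is placed in interrupt mode, confirming the --interrupt-mode flag took effect. The script then builds its storage stack entirely over RPC; the trace below (target/nvmf_lvol.sh@21-37) reduces to the sequence sketched here, with each UUID captured into a shell variable exactly as the script does:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                       # Malloc0: 64 MB, 512 B blocks
    $rpc bdev_malloc_create 64 512                       # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # prints the new lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol, prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

So the exported namespace is an lvol carved from an lvstore sitting on a RAID0 of two malloc bdevs, which gives the later resize and inflate operations room to grow (MALLOC_BDEV_SIZE=64 per leg against LVOL_BDEV_FINAL_SIZE=30).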
00:40:15.866 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:15.866 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:15.866 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:15.866 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:15.866 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:16.125 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.125 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:16.125 [2024-12-10 12:43:22.878481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:16.125 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:16.384 12:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:16.384 12:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:16.643 12:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:16.643 12:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:16.902 12:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:17.161 12:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c30a1a25-bbed-41c7-ae21-54515b17ef4e 00:40:17.161 12:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c30a1a25-bbed-41c7-ae21-54515b17ef4e lvol 20 00:40:17.420 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2c6bf663-172f-443e-927b-be84741f0017 00:40:17.420 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:17.420 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2c6bf663-172f-443e-927b-be84741f0017 00:40:17.678 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:17.936 [2024-12-10 12:43:24.550484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:40:17.936 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:18.195 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3932664 00:40:18.195 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:18.195 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:19.131 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2c6bf663-172f-443e-927b-be84741f0017 MY_SNAPSHOT 00:40:19.397 12:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=35e8ad0b-10ff-40c6-a24e-6f12f0b894cf 00:40:19.397 12:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2c6bf663-172f-443e-927b-be84741f0017 30 00:40:19.660 12:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 35e8ad0b-10ff-40c6-a24e-6f12f0b894cf MY_CLONE 00:40:19.919 12:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=abc0d938-ea07-481f-a800-77b07b206e67 00:40:19.919 12:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate abc0d938-ea07-481f-a800-77b07b206e67 00:40:20.487 12:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3932664 00:40:28.606 Initializing NVMe Controllers 00:40:28.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:28.606 Controller IO queue size 128, less than required. 00:40:28.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:28.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:28.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:28.606 Initialization complete. Launching workers. 
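This is the heart of the test: spdk_nvme_perf (pid 3932664) drives ten seconds of 4 KiB random writes at queue depth 128 from cores 3 and 4 (-c 0x18), and while that I/O is in flight the script snapshots, resizes, clones, and inflates the very lvol being written. Restating target/nvmf_lvol.sh@47-53 with the IDs from this run:

    # Everything below runs while perf is still writing to the namespace.
    $rpc bdev_lvol_snapshot 2c6bf663-172f-443e-927b-be84741f0017 MY_SNAPSHOT
    $rpc bdev_lvol_resize   2c6bf663-172f-443e-927b-be84741f0017 30    # grow 20 MiB -> 30 MiB
    $rpc bdev_lvol_clone    35e8ad0b-10ff-40c6-a24e-6f12f0b894cf MY_CLONE
    $rpc bdev_lvol_inflate  abc0d938-ea07-481f-a800-77b07b206e67       # allocate all clusters, decoupling the clone
    wait 3932664    # perf must still finish cleanly; its stats follow

The throughput table that follows is the pass signal: both cores sustained I/O for the full run, and the max latencies near 130-140 ms plausibly correspond to writes that landed mid-metadata-operation.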
00:40:28.606 ======================================================== 00:40:28.606 Latency(us) 00:40:28.606 Device Information : IOPS MiB/s Average min max 00:40:28.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11533.60 45.05 11103.87 444.14 129961.08 00:40:28.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11241.10 43.91 11389.69 3440.94 140903.93 00:40:28.606 ======================================================== 00:40:28.606 Total : 22774.70 88.96 11244.94 444.14 140903.93 00:40:28.606 00:40:28.606 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:28.865 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2c6bf663-172f-443e-927b-be84741f0017 00:40:28.865 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c30a1a25-bbed-41c7-ae21-54515b17ef4e 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:29.124 rmmod nvme_tcp 00:40:29.124 rmmod nvme_fabrics 00:40:29.124 rmmod nvme_keyring 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3932185 ']' 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3932185 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3932185 ']' 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3932185 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3932185 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3932185' 00:40:29.124 killing process with pid 3932185 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3932185 00:40:29.124 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3932185 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.643 12:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:33.180 00:40:33.180 real 0m23.451s 00:40:33.180 user 0m57.805s 00:40:33.180 sys 0m9.265s 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:33.180 ************************************ 00:40:33.180 END TEST nvmf_lvol 00:40:33.180 ************************************ 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:33.180 ************************************ 00:40:33.180 START TEST nvmf_lvs_grow 00:40:33.180 
************************************ 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:33.180 * Looking for test storage... 00:40:33.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.180 --rc genhtml_branch_coverage=1 00:40:33.180 --rc genhtml_function_coverage=1 00:40:33.180 --rc genhtml_legend=1 00:40:33.180 --rc geninfo_all_blocks=1 00:40:33.180 --rc geninfo_unexecuted_blocks=1 00:40:33.180 00:40:33.180 ' 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.180 --rc genhtml_branch_coverage=1 00:40:33.180 --rc genhtml_function_coverage=1 00:40:33.180 --rc genhtml_legend=1 00:40:33.180 --rc geninfo_all_blocks=1 00:40:33.180 --rc geninfo_unexecuted_blocks=1 00:40:33.180 00:40:33.180 ' 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.180 --rc genhtml_branch_coverage=1 00:40:33.180 --rc genhtml_function_coverage=1 00:40:33.180 --rc genhtml_legend=1 00:40:33.180 --rc geninfo_all_blocks=1 00:40:33.180 --rc geninfo_unexecuted_blocks=1 00:40:33.180 00:40:33.180 ' 00:40:33.180 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:33.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.180 --rc genhtml_branch_coverage=1 00:40:33.180 --rc genhtml_function_coverage=1 00:40:33.180 --rc genhtml_legend=1 00:40:33.180 --rc geninfo_all_blocks=1 00:40:33.181 --rc geninfo_unexecuted_blocks=1 00:40:33.181 00:40:33.181 ' 00:40:33.181 12:43:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:33.181 12:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:38.449 12:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:38.449 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:38.449 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.449 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:38.450 Found net devices under 0000:af:00.0: cvl_0_0 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:38.450 Found net devices under 0000:af:00.1: cvl_0_1 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:38.450 12:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:38.450 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:38.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:38.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:40:38.450 00:40:38.450 --- 10.0.0.2 ping statistics --- 00:40:38.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.450 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:38.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:38.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:40:38.450 00:40:38.450 --- 10.0.0.1 ping statistics --- 00:40:38.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.450 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3938091 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3938091 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3938091 ']' 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:38.450 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:38.450 [2024-12-10 12:43:45.274511] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
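Condensing the nvmfappstart step just traced: the target binary runs inside the namespace created earlier, and the harness blocks until its RPC socket answers. A minimal sketch, with $SPDK standing in for the workspace's spdk tree (the trace itself uses full /var/jenkins/... paths) and the readiness loop only approximating waitforlisten:

sudo ip netns exec cvl_0_0_ns_spdk \
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
# poll until the app listens on /var/tmp/spdk.sock before issuing RPCs
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done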
00:40:38.708 [2024-12-10 12:43:45.276640] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:40:38.708 [2024-12-10 12:43:45.276708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:38.708 [2024-12-10 12:43:45.392364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.708 [2024-12-10 12:43:45.492597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:38.708 [2024-12-10 12:43:45.492641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:38.708 [2024-12-10 12:43:45.492653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:38.708 [2024-12-10 12:43:45.492662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:38.708 [2024-12-10 12:43:45.492671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:38.708 [2024-12-10 12:43:45.493964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.275 [2024-12-10 12:43:45.810823] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:39.275 [2024-12-10 12:43:45.811070] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:39.275 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:39.275 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:39.275 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:39.275 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:39.275 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:39.534 [2024-12-10 12:43:46.298698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:39.534 ************************************ 00:40:39.534 START TEST lvs_grow_clean 00:40:39.534 ************************************ 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:39.534 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:39.792 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:39.792 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:39.792 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:40.051 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=120b2d14-6524-40a2-988d-40277c7f293c 00:40:40.051 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:40.051 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:40.310 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:40.310 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:40.310 12:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 120b2d14-6524-40a2-988d-40277c7f293c lvol 150 00:40:40.569 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6deef88b-4449-48cd-b184-c439f0846227 00:40:40.569 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:40.569 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:40.569 [2024-12-10 12:43:47.346544] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:40.569 [2024-12-10 12:43:47.346647] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:40.569 true 00:40:40.569 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:40.569 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:40.828 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:40.828 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:41.086 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6deef88b-4449-48cd-b184-c439f0846227 00:40:41.345 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:41.345 [2024-12-10 12:43:48.123252] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:41.345 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3938614 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3938614 /var/tmp/bdevperf.sock 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3938614 ']' 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:41.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:41.604 12:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:41.604 [2024-12-10 12:43:48.408293] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:40:41.604 [2024-12-10 12:43:48.408382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938614 ] 00:40:41.863 [2024-12-10 12:43:48.520854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.863 [2024-12-10 12:43:48.632394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:42.431 12:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:42.431 12:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:42.431 12:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:42.999 Nvme0n1 00:40:42.999 12:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:42.999 [ 00:40:42.999 { 00:40:42.999 "name": "Nvme0n1", 00:40:42.999 "aliases": [ 00:40:42.999 "6deef88b-4449-48cd-b184-c439f0846227" 00:40:42.999 ], 00:40:42.999 "product_name": "NVMe disk", 00:40:42.999 "block_size": 4096, 00:40:42.999 "num_blocks": 38912, 00:40:42.999 "uuid": "6deef88b-4449-48cd-b184-c439f0846227", 00:40:42.999 "numa_id": 1, 00:40:42.999 "assigned_rate_limits": { 00:40:42.999 "rw_ios_per_sec": 0, 00:40:42.999 "rw_mbytes_per_sec": 0, 00:40:42.999 "r_mbytes_per_sec": 0, 00:40:42.999 "w_mbytes_per_sec": 0 00:40:42.999 }, 00:40:42.999 "claimed": false, 00:40:42.999 "zoned": false, 00:40:42.999 "supported_io_types": { 00:40:42.999 "read": true, 00:40:42.999 "write": true, 00:40:42.999 "unmap": true, 00:40:42.999 "flush": true, 00:40:42.999 "reset": true, 00:40:42.999 "nvme_admin": true, 00:40:42.999 "nvme_io": true, 00:40:42.999 "nvme_io_md": false, 00:40:42.999 "write_zeroes": true, 00:40:42.999 "zcopy": false, 00:40:42.999 "get_zone_info": false, 00:40:42.999 "zone_management": false, 00:40:42.999 "zone_append": false, 00:40:42.999 "compare": true, 00:40:42.999 "compare_and_write": true, 00:40:42.999 "abort": true, 00:40:42.999 "seek_hole": false, 00:40:42.999 "seek_data": false, 00:40:42.999 "copy": true, 
00:40:42.999 "nvme_iov_md": false 00:40:42.999 }, 00:40:42.999 "memory_domains": [ 00:40:42.999 { 00:40:42.999 "dma_device_id": "system", 00:40:42.999 "dma_device_type": 1 00:40:42.999 } 00:40:42.999 ], 00:40:42.999 "driver_specific": { 00:40:42.999 "nvme": [ 00:40:42.999 { 00:40:42.999 "trid": { 00:40:42.999 "trtype": "TCP", 00:40:42.999 "adrfam": "IPv4", 00:40:42.999 "traddr": "10.0.0.2", 00:40:42.999 "trsvcid": "4420", 00:40:42.999 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:42.999 }, 00:40:42.999 "ctrlr_data": { 00:40:42.999 "cntlid": 1, 00:40:42.999 "vendor_id": "0x8086", 00:40:42.999 "model_number": "SPDK bdev Controller", 00:40:42.999 "serial_number": "SPDK0", 00:40:42.999 "firmware_revision": "25.01", 00:40:42.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:42.999 "oacs": { 00:40:42.999 "security": 0, 00:40:42.999 "format": 0, 00:40:42.999 "firmware": 0, 00:40:42.999 "ns_manage": 0 00:40:42.999 }, 00:40:42.999 "multi_ctrlr": true, 00:40:42.999 "ana_reporting": false 00:40:42.999 }, 00:40:42.999 "vs": { 00:40:42.999 "nvme_version": "1.3" 00:40:42.999 }, 00:40:42.999 "ns_data": { 00:40:42.999 "id": 1, 00:40:42.999 "can_share": true 00:40:42.999 } 00:40:42.999 } 00:40:42.999 ], 00:40:42.999 "mp_policy": "active_passive" 00:40:42.999 } 00:40:42.999 } 00:40:42.999 ] 00:40:42.999 12:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3938850 00:40:42.999 12:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:42.999 12:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:43.258 Running I/O for 10 seconds... 
00:40:44.195 Latency(us) 00:40:44.195 [2024-12-10T11:43:51.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:44.195 Nvme0n1 : 1.00 19558.00 76.40 0.00 0.00 0.00 0.00 0.00 00:40:44.195 [2024-12-10T11:43:51.021Z] =================================================================================================================== 00:40:44.195 [2024-12-10T11:43:51.021Z] Total : 19558.00 76.40 0.00 0.00 0.00 0.00 0.00 00:40:44.195 00:40:45.129 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:45.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:45.129 Nvme0n1 : 2.00 20002.50 78.13 0.00 0.00 0.00 0.00 0.00 00:40:45.129 [2024-12-10T11:43:51.955Z] =================================================================================================================== 00:40:45.129 [2024-12-10T11:43:51.955Z] Total : 20002.50 78.13 0.00 0.00 0.00 0.00 0.00 00:40:45.129 00:40:45.129 true 00:40:45.388 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:45.388 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:45.388 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:45.388 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:45.388 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3938850 00:40:46.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:46.325 Nvme0n1 : 3.00 20150.67 78.71 0.00 0.00 0.00 0.00 0.00 00:40:46.325 [2024-12-10T11:43:53.151Z] =================================================================================================================== 00:40:46.325 [2024-12-10T11:43:53.151Z] Total : 20150.67 78.71 0.00 0.00 0.00 0.00 0.00 00:40:46.325 00:40:47.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:47.261 Nvme0n1 : 4.00 20256.50 79.13 0.00 0.00 0.00 0.00 0.00 00:40:47.261 [2024-12-10T11:43:54.087Z] =================================================================================================================== 00:40:47.261 [2024-12-10T11:43:54.087Z] Total : 20256.50 79.13 0.00 0.00 0.00 0.00 0.00 00:40:47.261 00:40:48.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:48.198 Nvme0n1 : 5.00 20326.80 79.40 0.00 0.00 0.00 0.00 0.00 00:40:48.198 [2024-12-10T11:43:55.024Z] =================================================================================================================== 00:40:48.198 [2024-12-10T11:43:55.024Z] Total : 20326.80 79.40 0.00 0.00 0.00 0.00 0.00 00:40:48.198 00:40:49.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:49.134 Nvme0n1 : 6.00 20389.17 79.65 0.00 0.00 0.00 0.00 0.00 00:40:49.134 [2024-12-10T11:43:55.960Z] 
=================================================================================================================== 00:40:49.134 [2024-12-10T11:43:55.960Z] Total : 20389.17 79.65 0.00 0.00 0.00 0.00 0.00 00:40:49.134 00:40:50.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:50.070 Nvme0n1 : 7.00 20433.71 79.82 0.00 0.00 0.00 0.00 0.00 00:40:50.070 [2024-12-10T11:43:56.896Z] =================================================================================================================== 00:40:50.070 [2024-12-10T11:43:56.896Z] Total : 20433.71 79.82 0.00 0.00 0.00 0.00 0.00 00:40:50.070 00:40:51.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:51.448 Nvme0n1 : 8.00 20459.25 79.92 0.00 0.00 0.00 0.00 0.00 00:40:51.448 [2024-12-10T11:43:58.274Z] =================================================================================================================== 00:40:51.448 [2024-12-10T11:43:58.274Z] Total : 20459.25 79.92 0.00 0.00 0.00 0.00 0.00 00:40:51.448 00:40:52.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:52.384 Nvme0n1 : 9.00 20482.78 80.01 0.00 0.00 0.00 0.00 0.00 00:40:52.384 [2024-12-10T11:43:59.210Z] =================================================================================================================== 00:40:52.384 [2024-12-10T11:43:59.210Z] Total : 20482.78 80.01 0.00 0.00 0.00 0.00 0.00 00:40:52.384 00:40:53.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.321 Nvme0n1 : 10.00 20504.60 80.10 0.00 0.00 0.00 0.00 0.00 00:40:53.321 [2024-12-10T11:44:00.147Z] =================================================================================================================== 00:40:53.321 [2024-12-10T11:44:00.147Z] Total : 20504.60 80.10 0.00 0.00 0.00 0.00 0.00 00:40:53.321 00:40:53.321 00:40:53.321 Latency(us) 00:40:53.321 [2024-12-10T11:44:00.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:53.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.321 Nvme0n1 : 10.00 20508.60 80.11 0.00 0.00 6238.09 3651.29 19099.06 00:40:53.321 [2024-12-10T11:44:00.147Z] =================================================================================================================== 00:40:53.321 [2024-12-10T11:44:00.147Z] Total : 20508.60 80.11 0.00 0.00 6238.09 3651.29 19099.06 00:40:53.321 { 00:40:53.321 "results": [ 00:40:53.321 { 00:40:53.321 "job": "Nvme0n1", 00:40:53.321 "core_mask": "0x2", 00:40:53.321 "workload": "randwrite", 00:40:53.321 "status": "finished", 00:40:53.321 "queue_depth": 128, 00:40:53.321 "io_size": 4096, 00:40:53.321 "runtime": 10.004289, 00:40:53.321 "iops": 20508.60385980453, 00:40:53.321 "mibps": 80.11173382736145, 00:40:53.321 "io_failed": 0, 00:40:53.321 "io_timeout": 0, 00:40:53.321 "avg_latency_us": 6238.085364905141, 00:40:53.321 "min_latency_us": 3651.2914285714287, 00:40:53.321 "max_latency_us": 19099.062857142857 00:40:53.321 } 00:40:53.321 ], 00:40:53.321 "core_count": 1 00:40:53.321 } 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3938614 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3938614 ']' 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3938614 
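The assertion under test is easy to miss among the per-second samples above: two seconds into the workload, the lvstore is grown onto the enlarged AIO bdev while I/O keeps flowing, and the cluster count is re-read. Isolated from the trace:

$SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u 120b2d14-6524-40a2-988d-40277c7f293c
$SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c \
    | jq -r '.[0].total_data_clusters'   # 99 after the grow, up from 49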
00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3938614 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3938614' 00:40:53.321 killing process with pid 3938614 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3938614 00:40:53.321 Received shutdown signal, test time was about 10.000000 seconds 00:40:53.321 00:40:53.321 Latency(us) 00:40:53.321 [2024-12-10T11:44:00.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:53.321 [2024-12-10T11:44:00.147Z] =================================================================================================================== 00:40:53.321 [2024-12-10T11:44:00.147Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:53.321 12:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3938614 00:40:54.258 12:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:54.258 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:54.516 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:54.516 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:54.774 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:54.774 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:54.774 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:55.032 [2024-12-10 12:44:01.610691] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 
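The lines that follow are the suite's negative check: deleting aio_bdev hot-removed the lvstore on top of it, so the next lookup has to fail, and the NOT wrapper succeeds only when the wrapped command exits non-zero. In plain shell terms:

$SPDK/scripts/rpc.py bdev_aio_delete aio_bdev
if ! $SPDK/scripts/rpc.py bdev_lvol_get_lvstores \
        -u 120b2d14-6524-40a2-988d-40277c7f293c; then
    echo "lvstore gone as expected (JSON-RPC error -19, No such device)"
fi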
00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:55.032 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:55.032 request: 00:40:55.032 { 00:40:55.032 "uuid": "120b2d14-6524-40a2-988d-40277c7f293c", 00:40:55.032 "method": "bdev_lvol_get_lvstores", 00:40:55.032 "req_id": 1 00:40:55.032 } 00:40:55.032 Got JSON-RPC error response 00:40:55.032 response: 00:40:55.032 { 00:40:55.032 "code": -19, 00:40:55.032 "message": "No such device" 00:40:55.032 } 00:40:55.291 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:55.291 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:55.291 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:55.291 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:55.291 12:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:55.291 aio_bdev 00:40:55.291 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
6deef88b-4449-48cd-b184-c439f0846227 00:40:55.291 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6deef88b-4449-48cd-b184-c439f0846227 00:40:55.291 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:55.291 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:55.291 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:55.291 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:55.291 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:55.549 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6deef88b-4449-48cd-b184-c439f0846227 -t 2000 00:40:55.809 [ 00:40:55.809 { 00:40:55.809 "name": "6deef88b-4449-48cd-b184-c439f0846227", 00:40:55.809 "aliases": [ 00:40:55.809 "lvs/lvol" 00:40:55.809 ], 00:40:55.809 "product_name": "Logical Volume", 00:40:55.809 "block_size": 4096, 00:40:55.809 "num_blocks": 38912, 00:40:55.809 "uuid": "6deef88b-4449-48cd-b184-c439f0846227", 00:40:55.809 "assigned_rate_limits": { 00:40:55.809 "rw_ios_per_sec": 0, 00:40:55.809 "rw_mbytes_per_sec": 0, 00:40:55.809 "r_mbytes_per_sec": 0, 00:40:55.809 "w_mbytes_per_sec": 0 00:40:55.809 }, 00:40:55.809 "claimed": false, 00:40:55.809 "zoned": false, 00:40:55.809 "supported_io_types": { 00:40:55.809 "read": true, 00:40:55.809 "write": true, 00:40:55.809 "unmap": true, 00:40:55.809 "flush": false, 00:40:55.809 "reset": true, 00:40:55.809 "nvme_admin": false, 00:40:55.809 "nvme_io": false, 00:40:55.809 "nvme_io_md": false, 00:40:55.809 "write_zeroes": true, 00:40:55.809 "zcopy": false, 00:40:55.809 "get_zone_info": false, 00:40:55.809 "zone_management": false, 00:40:55.809 "zone_append": false, 00:40:55.809 "compare": false, 00:40:55.809 "compare_and_write": false, 00:40:55.809 "abort": false, 00:40:55.809 "seek_hole": true, 00:40:55.809 "seek_data": true, 00:40:55.809 "copy": false, 00:40:55.809 "nvme_iov_md": false 00:40:55.809 }, 00:40:55.809 "driver_specific": { 00:40:55.809 "lvol": { 00:40:55.809 "lvol_store_uuid": "120b2d14-6524-40a2-988d-40277c7f293c", 00:40:55.809 "base_bdev": "aio_bdev", 00:40:55.809 "thin_provision": false, 00:40:55.809 "num_allocated_clusters": 38, 00:40:55.809 "snapshot": false, 00:40:55.809 "clone": false, 00:40:55.809 "esnap_clone": false 00:40:55.809 } 00:40:55.809 } 00:40:55.809 } 00:40:55.809 ] 00:40:55.809 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:55.809 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:55.809 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:56.068 12:44:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:56.068 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:56.068 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:56.068 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:56.068 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6deef88b-4449-48cd-b184-c439f0846227 00:40:56.326 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 120b2d14-6524-40a2-988d-40277c7f293c 00:40:56.585 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:56.844 00:40:56.844 real 0m17.111s 00:40:56.844 user 0m16.693s 00:40:56.844 sys 0m1.574s 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:56.844 ************************************ 00:40:56.844 END TEST lvs_grow_clean 00:40:56.844 ************************************ 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:56.844 ************************************ 00:40:56.844 START TEST lvs_grow_dirty 00:40:56.844 ************************************ 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:56.844 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:57.102 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:57.102 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:57.360 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=82a8078a-e7af-451a-93e5-efaaaa5bad44 00:40:57.360 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:40:57.360 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:57.360 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:57.360 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:57.360 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 lvol 150 00:40:57.619 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4d5684b-daca-44be-a73d-e0db963f29ef 00:40:57.619 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:57.619 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:57.878 [2024-12-10 12:44:04.494515] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:57.878 [2024-12-10 12:44:04.494615] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:57.878 true 00:40:57.878 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:40:57.878 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:58.136 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:58.136 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:58.136 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4d5684b-daca-44be-a73d-e0db963f29ef 00:40:58.395 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:58.653 [2024-12-10 12:44:05.247229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3941351 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3941351 /var/tmp/bdevperf.sock 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3941351 ']' 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:58.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
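From here the trace replays the same flow as the dirty variant: a fresh lvstore (82a8078a-e7af-451a-93e5-efaaaa5bad44), lvol (a4d5684b-daca-44be-a73d-e0db963f29ef) and bdevperf instance (pid 3941351). Per the suite's own calls, the two runs differ only in the argument passed through run_test:

run_test lvs_grow_clean lvs_grow          # nvmf_lvs_grow.sh@102
run_test lvs_grow_dirty lvs_grow dirty    # nvmf_lvs_grow.sh@103
# "dirty" presumably reaches the [[ ... == dirty ]] branch at
# nvmf_lvs_grow.sh@72, which the clean run above evaluated with ''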
00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:58.653 12:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:58.912 [2024-12-10 12:44:05.530248] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:40:58.912 [2024-12-10 12:44:05.530337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3941351 ] 00:40:58.912 [2024-12-10 12:44:05.642382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.170 [2024-12-10 12:44:05.751470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:59.737 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:59.737 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:59.738 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:59.996 Nvme0n1 00:40:59.996 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:59.996 [ 00:40:59.996 { 00:40:59.996 "name": "Nvme0n1", 00:40:59.996 "aliases": [ 00:40:59.996 "a4d5684b-daca-44be-a73d-e0db963f29ef" 00:40:59.996 ], 00:40:59.996 "product_name": "NVMe disk", 00:40:59.996 "block_size": 4096, 00:40:59.996 "num_blocks": 38912, 00:40:59.996 "uuid": "a4d5684b-daca-44be-a73d-e0db963f29ef", 00:40:59.996 "numa_id": 1, 00:40:59.996 "assigned_rate_limits": { 00:40:59.996 "rw_ios_per_sec": 0, 00:40:59.996 "rw_mbytes_per_sec": 0, 00:40:59.996 "r_mbytes_per_sec": 0, 00:40:59.996 "w_mbytes_per_sec": 0 00:40:59.996 }, 00:40:59.996 "claimed": false, 00:40:59.996 "zoned": false, 00:40:59.996 "supported_io_types": { 00:40:59.996 "read": true, 00:40:59.996 "write": true, 00:40:59.996 "unmap": true, 00:40:59.996 "flush": true, 00:40:59.996 "reset": true, 00:40:59.996 "nvme_admin": true, 00:40:59.996 "nvme_io": true, 00:40:59.996 "nvme_io_md": false, 00:40:59.996 "write_zeroes": true, 00:40:59.996 "zcopy": false, 00:40:59.996 "get_zone_info": false, 00:40:59.996 "zone_management": false, 00:40:59.996 "zone_append": false, 00:40:59.996 "compare": true, 00:40:59.996 "compare_and_write": true, 00:40:59.996 "abort": true, 00:40:59.996 "seek_hole": false, 00:40:59.996 "seek_data": false, 00:40:59.996 "copy": true, 00:40:59.996 "nvme_iov_md": false 00:40:59.996 }, 00:40:59.997 "memory_domains": [ 00:40:59.997 { 00:40:59.997 "dma_device_id": "system", 00:40:59.997 "dma_device_type": 1 00:40:59.997 } 00:40:59.997 ], 00:40:59.997 "driver_specific": { 00:40:59.997 "nvme": [ 00:40:59.997 { 00:40:59.997 "trid": { 00:40:59.997 "trtype": "TCP", 00:40:59.997 "adrfam": "IPv4", 00:40:59.997 "traddr": "10.0.0.2", 00:40:59.997 "trsvcid": "4420", 00:40:59.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:59.997 }, 00:40:59.997 "ctrlr_data": 
{ 00:40:59.997 "cntlid": 1, 00:40:59.997 "vendor_id": "0x8086", 00:40:59.997 "model_number": "SPDK bdev Controller", 00:40:59.997 "serial_number": "SPDK0", 00:40:59.997 "firmware_revision": "25.01", 00:40:59.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:59.997 "oacs": { 00:40:59.997 "security": 0, 00:40:59.997 "format": 0, 00:40:59.997 "firmware": 0, 00:40:59.997 "ns_manage": 0 00:40:59.997 }, 00:40:59.997 "multi_ctrlr": true, 00:40:59.997 "ana_reporting": false 00:40:59.997 }, 00:40:59.997 "vs": { 00:40:59.997 "nvme_version": "1.3" 00:40:59.997 }, 00:40:59.997 "ns_data": { 00:40:59.997 "id": 1, 00:40:59.997 "can_share": true 00:40:59.997 } 00:40:59.997 } 00:40:59.997 ], 00:40:59.997 "mp_policy": "active_passive" 00:40:59.997 } 00:40:59.997 } 00:40:59.997 ] 00:40:59.997 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3941582 00:40:59.997 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:59.997 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:00.256 Running I/O for 10 seconds... 00:41:01.192 Latency(us) 00:41:01.192 [2024-12-10T11:44:08.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:01.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:01.192 Nvme0n1 : 1.00 20083.00 78.45 0.00 0.00 0.00 0.00 0.00 00:41:01.192 [2024-12-10T11:44:08.018Z] =================================================================================================================== 00:41:01.192 [2024-12-10T11:44:08.018Z] Total : 20083.00 78.45 0.00 0.00 0.00 0.00 0.00 00:41:01.192 00:41:02.129 12:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:02.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.129 Nvme0n1 : 2.00 20265.00 79.16 0.00 0.00 0.00 0.00 0.00 00:41:02.129 [2024-12-10T11:44:08.955Z] =================================================================================================================== 00:41:02.129 [2024-12-10T11:44:08.955Z] Total : 20265.00 79.16 0.00 0.00 0.00 0.00 0.00 00:41:02.129 00:41:02.387 true 00:41:02.388 12:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:02.388 12:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:02.388 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:02.388 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:02.388 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3941582 00:41:03.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:03.324 Nvme0n1 : 
3.00 20325.67 79.40 0.00 0.00 0.00 0.00 0.00 00:41:03.324 [2024-12-10T11:44:10.150Z] =================================================================================================================== 00:41:03.324 [2024-12-10T11:44:10.150Z] Total : 20325.67 79.40 0.00 0.00 0.00 0.00 0.00 00:41:03.324 00:41:04.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:04.260 Nvme0n1 : 4.00 20260.75 79.14 0.00 0.00 0.00 0.00 0.00 00:41:04.260 [2024-12-10T11:44:11.086Z] =================================================================================================================== 00:41:04.260 [2024-12-10T11:44:11.086Z] Total : 20260.75 79.14 0.00 0.00 0.00 0.00 0.00 00:41:04.260 00:41:05.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:05.196 Nvme0n1 : 5.00 20323.40 79.39 0.00 0.00 0.00 0.00 0.00 00:41:05.196 [2024-12-10T11:44:12.022Z] =================================================================================================================== 00:41:05.196 [2024-12-10T11:44:12.022Z] Total : 20323.40 79.39 0.00 0.00 0.00 0.00 0.00 00:41:05.196 00:41:06.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:06.133 Nvme0n1 : 6.00 20386.33 79.63 0.00 0.00 0.00 0.00 0.00 00:41:06.133 [2024-12-10T11:44:12.959Z] =================================================================================================================== 00:41:06.133 [2024-12-10T11:44:12.959Z] Total : 20386.33 79.63 0.00 0.00 0.00 0.00 0.00 00:41:06.133 00:41:07.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:07.069 Nvme0n1 : 7.00 20431.29 79.81 0.00 0.00 0.00 0.00 0.00 00:41:07.069 [2024-12-10T11:44:13.895Z] =================================================================================================================== 00:41:07.069 [2024-12-10T11:44:13.895Z] Total : 20431.29 79.81 0.00 0.00 0.00 0.00 0.00 00:41:07.069 00:41:08.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:08.447 Nvme0n1 : 8.00 20465.00 79.94 0.00 0.00 0.00 0.00 0.00 00:41:08.447 [2024-12-10T11:44:15.273Z] =================================================================================================================== 00:41:08.447 [2024-12-10T11:44:15.273Z] Total : 20465.00 79.94 0.00 0.00 0.00 0.00 0.00 00:41:08.447 00:41:09.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.382 Nvme0n1 : 9.00 20491.22 80.04 0.00 0.00 0.00 0.00 0.00 00:41:09.382 [2024-12-10T11:44:16.208Z] =================================================================================================================== 00:41:09.382 [2024-12-10T11:44:16.208Z] Total : 20491.22 80.04 0.00 0.00 0.00 0.00 0.00 00:41:09.382 00:41:10.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:10.319 Nvme0n1 : 10.00 20513.90 80.13 0.00 0.00 0.00 0.00 0.00 00:41:10.319 [2024-12-10T11:44:17.145Z] =================================================================================================================== 00:41:10.319 [2024-12-10T11:44:17.145Z] Total : 20513.90 80.13 0.00 0.00 0.00 0.00 0.00 00:41:10.319 00:41:10.319 00:41:10.319 Latency(us) 00:41:10.319 [2024-12-10T11:44:17.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:10.319 Nvme0n1 : 10.01 20519.88 80.16 0.00 0.00 6234.08 3588.88 18350.08 00:41:10.319 
[2024-12-10T11:44:17.145Z] =================================================================================================================== 00:41:10.319 [2024-12-10T11:44:17.145Z] Total : 20519.88 80.16 0.00 0.00 6234.08 3588.88 18350.08 00:41:10.319 { 00:41:10.319 "results": [ 00:41:10.319 { 00:41:10.319 "job": "Nvme0n1", 00:41:10.319 "core_mask": "0x2", 00:41:10.319 "workload": "randwrite", 00:41:10.319 "status": "finished", 00:41:10.319 "queue_depth": 128, 00:41:10.319 "io_size": 4096, 00:41:10.319 "runtime": 10.008686, 00:41:10.319 "iops": 20519.876435328275, 00:41:10.319 "mibps": 80.15576732550107, 00:41:10.319 "io_failed": 0, 00:41:10.319 "io_timeout": 0, 00:41:10.319 "avg_latency_us": 6234.084654186482, 00:41:10.319 "min_latency_us": 3588.8761904761905, 00:41:10.319 "max_latency_us": 18350.08 00:41:10.319 } 00:41:10.319 ], 00:41:10.319 "core_count": 1 00:41:10.319 } 00:41:10.319 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3941351 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3941351 ']' 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3941351 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3941351 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3941351' 00:41:10.320 killing process with pid 3941351 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3941351 00:41:10.320 Received shutdown signal, test time was about 10.000000 seconds 00:41:10.320 00:41:10.320 Latency(us) 00:41:10.320 [2024-12-10T11:44:17.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.320 [2024-12-10T11:44:17.146Z] =================================================================================================================== 00:41:10.320 [2024-12-10T11:44:17.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:10.320 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3941351 00:41:11.256 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:11.256 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:41:11.515 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:11.515 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3938091 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3938091 00:41:11.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3938091 Killed "${NVMF_APP[@]}" "$@" 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3943373 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3943373 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3943373 ']' 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
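The dirty-shutdown setup above is deliberate: the first nvmf_tgt (pid 3938091) is killed with SIGKILL so the lvstore never unloads cleanly, then a fresh target is started in interrupt mode to recover it. A minimal sketch of that sequence, assuming the SPDK checkout as the working directory and reusing the cvl_0_0_ns_spdk namespace name printed in this log; old_nvmfpid is a placeholder for the pid the harness tracked:

    # Kill the old target hard so the lvstore stays dirty (no clean unload)
    kill -9 "$old_nvmfpid"
    # Restart nvmf_tgt in the target namespace: shm id 0, all tracepoint
    # groups, interrupt mode, core mask 0x1 (the exact flags logged above)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        --interrupt-mode -m 0x1 &
    # Poll the default RPC socket until the app answers, roughly what
    # waitforlisten does internally
    until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done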
00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:11.774 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:12.033 [2024-12-10 12:44:18.607084] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:12.033 [2024-12-10 12:44:18.609202] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:41:12.033 [2024-12-10 12:44:18.609270] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:12.033 [2024-12-10 12:44:18.748783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.033 [2024-12-10 12:44:18.847318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:12.033 [2024-12-10 12:44:18.847362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:12.033 [2024-12-10 12:44:18.847385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:12.033 [2024-12-10 12:44:18.847395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:12.033 [2024-12-10 12:44:18.847404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:12.033 [2024-12-10 12:44:18.848728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:12.599 [2024-12-10 12:44:19.164228] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:12.599 [2024-12-10 12:44:19.164471] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
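With the target back up, the blobstore notices that follow come from re-attaching the AIO file the dead target left behind: bdev examine finds the dirty lvstore and replays it. A condensed sketch using the RPCs, paths, and UUID exactly as they appear in this log (run from the SPDK root, an assumption):

    # Recreate the AIO bdev over the leftover backing file with 4 KiB blocks;
    # this is what triggers 'Performing recovery on blobstore'
    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_wait_for_examine
    # The lvol bdev reappears under its pre-crash UUID
    ./scripts/rpc.py bdev_get_bdevs -b a4d5684b-daca-44be-a73d-e0db963f29ef -t 2000

The test then re-reads the lvstore with bdev_lvol_get_lvstores and checks free_clusters (61) and total_data_clusters (99) against the post-grow values, confirming that the bdev_lvol_grow_lvstore issued before the crash survived recovery.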
00:41:12.599 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:12.599 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:12.599 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:12.599 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:12.599 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:12.858 [2024-12-10 12:44:19.628605] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:12.858 [2024-12-10 12:44:19.628811] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:12.858 [2024-12-10 12:44:19.628876] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a4d5684b-daca-44be-a73d-e0db963f29ef 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4d5684b-daca-44be-a73d-e0db963f29ef 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:12.858 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:13.117 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4d5684b-daca-44be-a73d-e0db963f29ef -t 2000 00:41:13.375 [ 00:41:13.375 { 00:41:13.375 "name": "a4d5684b-daca-44be-a73d-e0db963f29ef", 00:41:13.375 "aliases": [ 00:41:13.375 "lvs/lvol" 00:41:13.375 ], 00:41:13.375 "product_name": "Logical Volume", 00:41:13.375 "block_size": 4096, 00:41:13.375 "num_blocks": 38912, 00:41:13.375 "uuid": "a4d5684b-daca-44be-a73d-e0db963f29ef", 00:41:13.375 "assigned_rate_limits": { 00:41:13.375 "rw_ios_per_sec": 0, 00:41:13.375 "rw_mbytes_per_sec": 0, 00:41:13.375 
"r_mbytes_per_sec": 0, 00:41:13.375 "w_mbytes_per_sec": 0 00:41:13.375 }, 00:41:13.376 "claimed": false, 00:41:13.376 "zoned": false, 00:41:13.376 "supported_io_types": { 00:41:13.376 "read": true, 00:41:13.376 "write": true, 00:41:13.376 "unmap": true, 00:41:13.376 "flush": false, 00:41:13.376 "reset": true, 00:41:13.376 "nvme_admin": false, 00:41:13.376 "nvme_io": false, 00:41:13.376 "nvme_io_md": false, 00:41:13.376 "write_zeroes": true, 00:41:13.376 "zcopy": false, 00:41:13.376 "get_zone_info": false, 00:41:13.376 "zone_management": false, 00:41:13.376 "zone_append": false, 00:41:13.376 "compare": false, 00:41:13.376 "compare_and_write": false, 00:41:13.376 "abort": false, 00:41:13.376 "seek_hole": true, 00:41:13.376 "seek_data": true, 00:41:13.376 "copy": false, 00:41:13.376 "nvme_iov_md": false 00:41:13.376 }, 00:41:13.376 "driver_specific": { 00:41:13.376 "lvol": { 00:41:13.376 "lvol_store_uuid": "82a8078a-e7af-451a-93e5-efaaaa5bad44", 00:41:13.376 "base_bdev": "aio_bdev", 00:41:13.376 "thin_provision": false, 00:41:13.376 "num_allocated_clusters": 38, 00:41:13.376 "snapshot": false, 00:41:13.376 "clone": false, 00:41:13.376 "esnap_clone": false 00:41:13.376 } 00:41:13.376 } 00:41:13.376 } 00:41:13.376 ] 00:41:13.376 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:13.376 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:13.376 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:13.634 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:13.634 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:13.634 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:13.634 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:13.634 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:13.892 [2024-12-10 12:44:20.609536] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:13.892 12:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:13.892 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:14.151 request: 00:41:14.151 { 00:41:14.151 "uuid": "82a8078a-e7af-451a-93e5-efaaaa5bad44", 00:41:14.151 "method": "bdev_lvol_get_lvstores", 00:41:14.151 "req_id": 1 00:41:14.151 } 00:41:14.151 Got JSON-RPC error response 00:41:14.151 response: 00:41:14.151 { 00:41:14.151 "code": -19, 00:41:14.151 "message": "No such device" 00:41:14.151 } 00:41:14.151 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:14.151 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:14.151 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:14.151 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:14.151 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:14.418 aio_bdev 00:41:14.418 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a4d5684b-daca-44be-a73d-e0db963f29ef 00:41:14.419 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4d5684b-daca-44be-a73d-e0db963f29ef 00:41:14.419 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:14.419 12:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:14.419 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:14.419 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:14.419 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:14.682 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4d5684b-daca-44be-a73d-e0db963f29ef -t 2000 00:41:14.682 [ 00:41:14.682 { 00:41:14.682 "name": "a4d5684b-daca-44be-a73d-e0db963f29ef", 00:41:14.682 "aliases": [ 00:41:14.682 "lvs/lvol" 00:41:14.682 ], 00:41:14.682 "product_name": "Logical Volume", 00:41:14.682 "block_size": 4096, 00:41:14.682 "num_blocks": 38912, 00:41:14.682 "uuid": "a4d5684b-daca-44be-a73d-e0db963f29ef", 00:41:14.682 "assigned_rate_limits": { 00:41:14.682 "rw_ios_per_sec": 0, 00:41:14.682 "rw_mbytes_per_sec": 0, 00:41:14.682 "r_mbytes_per_sec": 0, 00:41:14.682 "w_mbytes_per_sec": 0 00:41:14.682 }, 00:41:14.682 "claimed": false, 00:41:14.682 "zoned": false, 00:41:14.682 "supported_io_types": { 00:41:14.682 "read": true, 00:41:14.682 "write": true, 00:41:14.682 "unmap": true, 00:41:14.682 "flush": false, 00:41:14.682 "reset": true, 00:41:14.682 "nvme_admin": false, 00:41:14.682 "nvme_io": false, 00:41:14.682 "nvme_io_md": false, 00:41:14.682 "write_zeroes": true, 00:41:14.682 "zcopy": false, 00:41:14.682 "get_zone_info": false, 00:41:14.682 "zone_management": false, 00:41:14.682 "zone_append": false, 00:41:14.682 "compare": false, 00:41:14.682 "compare_and_write": false, 00:41:14.682 "abort": false, 00:41:14.682 "seek_hole": true, 00:41:14.682 "seek_data": true, 00:41:14.682 "copy": false, 00:41:14.682 "nvme_iov_md": false 00:41:14.682 }, 00:41:14.682 "driver_specific": { 00:41:14.682 "lvol": { 00:41:14.682 "lvol_store_uuid": "82a8078a-e7af-451a-93e5-efaaaa5bad44", 00:41:14.682 "base_bdev": "aio_bdev", 00:41:14.682 "thin_provision": false, 00:41:14.682 "num_allocated_clusters": 38, 00:41:14.682 "snapshot": false, 00:41:14.682 "clone": false, 00:41:14.682 "esnap_clone": false 00:41:14.682 } 00:41:14.682 } 00:41:14.682 } 00:41:14.682 ] 00:41:14.682 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:14.682 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:14.682 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:14.988 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:14.988 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:14.988 12:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:15.299 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:15.299 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4d5684b-daca-44be-a73d-e0db963f29ef 00:41:15.299 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82a8078a-e7af-451a-93e5-efaaaa5bad44 00:41:15.592 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:15.850 00:41:15.850 real 0m18.982s 00:41:15.850 user 0m36.299s 00:41:15.850 sys 0m3.948s 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:15.850 ************************************ 00:41:15.850 END TEST lvs_grow_dirty 00:41:15.850 ************************************ 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:15.850 nvmf_trace.0 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
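The tar step above is process_shm archiving the trace buffer the target kept in shared memory (/dev/shm/nvmf_trace.0, per the startup banner earlier). To inspect it offline, a sketch assuming spdk_trace accepts -f for reading a copied trace file:

    # Copy the shm trace out (or extract nvmf_trace.0_shm.tar.gz from the
    # job's output directory) and decode it
    cp /dev/shm/nvmf_trace.0 /tmp/
    ./build/bin/spdk_trace -f /tmp/nvmf_trace.0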
00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:15.850 rmmod nvme_tcp 00:41:15.850 rmmod nvme_fabrics 00:41:15.850 rmmod nvme_keyring 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3943373 ']' 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3943373 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3943373 ']' 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3943373 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:15.850 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3943373 00:41:16.109 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:16.109 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:16.109 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3943373' 00:41:16.109 killing process with pid 3943373 00:41:16.109 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3943373 00:41:16.109 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3943373 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:17.046 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.582 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:19.582 00:41:19.582 real 0m46.319s 00:41:19.582 user 0m56.669s 00:41:19.582 sys 0m10.178s 00:41:19.583 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:19.583 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:19.583 ************************************ 00:41:19.583 END TEST nvmf_lvs_grow 00:41:19.583 ************************************ 00:41:19.583 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:19.583 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:19.583 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:19.583 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:19.583 ************************************ 00:41:19.583 START TEST nvmf_bdev_io_wait 00:41:19.583 ************************************ 00:41:19.583 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:19.583 * Looking for test storage... 
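run_test wraps each test script with timing and xtrace bookkeeping; the underlying invocation recorded here can be reproduced standalone. A sketch, assuming an SPDK checkout laid out like this workspace:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode

The test-storage banner and the lcov/version probing that follow come from the shared common scripts every test sources before doing any work of its own.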
00:41:19.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:19.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.583 --rc genhtml_branch_coverage=1 00:41:19.583 --rc genhtml_function_coverage=1 00:41:19.583 --rc genhtml_legend=1 00:41:19.583 --rc geninfo_all_blocks=1 00:41:19.583 --rc geninfo_unexecuted_blocks=1 00:41:19.583 00:41:19.583 ' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:19.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.583 --rc genhtml_branch_coverage=1 00:41:19.583 --rc genhtml_function_coverage=1 00:41:19.583 --rc genhtml_legend=1 00:41:19.583 --rc geninfo_all_blocks=1 00:41:19.583 --rc geninfo_unexecuted_blocks=1 00:41:19.583 00:41:19.583 ' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:19.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.583 --rc genhtml_branch_coverage=1 00:41:19.583 --rc genhtml_function_coverage=1 00:41:19.583 --rc genhtml_legend=1 00:41:19.583 --rc geninfo_all_blocks=1 00:41:19.583 --rc geninfo_unexecuted_blocks=1 00:41:19.583 00:41:19.583 ' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:19.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.583 --rc genhtml_branch_coverage=1 00:41:19.583 --rc genhtml_function_coverage=1 00:41:19.583 --rc genhtml_legend=1 00:41:19.583 --rc geninfo_all_blocks=1 00:41:19.583 --rc 
geninfo_unexecuted_blocks=1 00:41:19.583 00:41:19.583 ' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.583 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:19.584 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:24.857 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:24.858 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:24.858 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:24.858 Found net devices under 0000:af:00.0: cvl_0_0 00:41:24.858 
12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:24.858 Found net devices under 0000:af:00.1: cvl_0_1 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:41:24.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:41:24.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms
00:41:24.858
00:41:24.858 --- 10.0.0.2 ping statistics ---
00:41:24.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:41:24.858 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:41:24.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:41:24.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:41:24.858
00:41:24.858 --- 10.0.0.1 ping statistics ---
00:41:24.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:41:24.858 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:24.858 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3947570 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3947570 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3947570 ']' 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:24.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
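The trace above builds the physical loopback this test runs on: one E810 port (cvl_0_0) is moved into a private network namespace as the target side, the other (cvl_0_1) stays in the default namespace as the initiator, an iptables rule opens TCP port 4420, and the two pings prove reachability before nvmf_tgt is launched inside the namespace. Condensed into plain commands, the flow is roughly this (a sketch of what the trace shows, not the verbatim nvmf_tcp_init code):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
# The target itself runs inside the namespace; --wait-for-rpc defers init to an RPC:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
    --interrupt-mode -m 0xF --wait-for-rpc &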
00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:24.859 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:24.859 [2024-12-10 12:44:31.481393] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:24.859 [2024-12-10 12:44:31.483529] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:41:24.859 [2024-12-10 12:44:31.483597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:24.859 [2024-12-10 12:44:31.600027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:25.118 [2024-12-10 12:44:31.710473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:25.118 [2024-12-10 12:44:31.710515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:25.118 [2024-12-10 12:44:31.710527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:25.118 [2024-12-10 12:44:31.710536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:25.118 [2024-12-10 12:44:31.710546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:25.118 [2024-12-10 12:44:31.712650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:25.118 [2024-12-10 12:44:31.712666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:25.118 [2024-12-10 12:44:31.712757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:25.118 [2024-12-10 12:44:31.712769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:25.118 [2024-12-10 12:44:31.713227] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
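These NOTICE lines are the visible effect of --interrupt-mode: every reactor on the 0xF core mask and the app thread come up in an fd-driven event loop instead of busy polling, which is the behavior under test in this interrupt_mode suite. One hedged way to inspect the reactors once the app is up (framework_get_reactors is an SPDK RPC, but the exact fields it reports vary between SPDK versions):

./scripts/rpc.py framework_get_reactors | jq '.reactors[].lcore'   # expect one entry per core in -m 0xF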
00:41:25.686 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:25.686 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:41:25.686 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.687 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.946 [2024-12-10 12:44:32.554825] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:25.946 [2024-12-10 12:44:32.555705] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:25.946 [2024-12-10 12:44:32.556968] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:25.947 [2024-12-10 12:44:32.557846] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
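bdev_set_options -p 5 -c 1 shrinks the global bdev_io pool to five entries, which is exactly what lets bdev_io_wait.sh exercise the IO-wait (pool-exhaustion retry) path, and framework_start_init completes the startup that --wait-for-rpc deferred. Together with the records that follow, the whole target bring-up reduces to this rpc.py sequence (condensed; the test drives the same calls through its rpc_cmd wrapper against /var/tmp/spdk.sock):

rpc="./scripts/rpc.py"
$rpc bdev_set_options -p 5 -c 1        # tiny bdev_io pool -> forced IO waits
$rpc framework_start_init              # finish the deferred startup
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420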
00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.947 [2024-12-10 12:44:32.569393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.947 Malloc0 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.947 [2024-12-10 12:44:32.697698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3947819 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3947821 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:25.947 { 00:41:25.947 "params": { 00:41:25.947 "name": "Nvme$subsystem", 00:41:25.947 "trtype": "$TEST_TRANSPORT", 00:41:25.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.947 "adrfam": "ipv4", 00:41:25.947 "trsvcid": "$NVMF_PORT", 00:41:25.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.947 "hdgst": ${hdgst:-false}, 00:41:25.947 "ddgst": ${ddgst:-false} 00:41:25.947 }, 00:41:25.947 "method": "bdev_nvme_attach_controller" 00:41:25.947 } 00:41:25.947 EOF 00:41:25.947 )") 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3947823 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3947826 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:25.947 { 00:41:25.947 "params": { 00:41:25.947 "name": "Nvme$subsystem", 00:41:25.947 "trtype": "$TEST_TRANSPORT", 00:41:25.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.947 "adrfam": "ipv4", 00:41:25.947 "trsvcid": "$NVMF_PORT", 00:41:25.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.947 "hdgst": ${hdgst:-false}, 00:41:25.947 "ddgst": ${ddgst:-false} 00:41:25.947 }, 00:41:25.947 "method": "bdev_nvme_attach_controller" 
00:41:25.947 } 00:41:25.947 EOF 00:41:25.947 )") 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:25.947 { 00:41:25.947 "params": { 00:41:25.947 "name": "Nvme$subsystem", 00:41:25.947 "trtype": "$TEST_TRANSPORT", 00:41:25.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.947 "adrfam": "ipv4", 00:41:25.947 "trsvcid": "$NVMF_PORT", 00:41:25.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.947 "hdgst": ${hdgst:-false}, 00:41:25.947 "ddgst": ${ddgst:-false} 00:41:25.947 }, 00:41:25.947 "method": "bdev_nvme_attach_controller" 00:41:25.947 } 00:41:25.947 EOF 00:41:25.947 )") 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:25.947 { 00:41:25.947 "params": { 00:41:25.947 "name": "Nvme$subsystem", 00:41:25.947 "trtype": "$TEST_TRANSPORT", 00:41:25.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.947 "adrfam": "ipv4", 00:41:25.947 "trsvcid": "$NVMF_PORT", 00:41:25.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.947 "hdgst": ${hdgst:-false}, 00:41:25.947 "ddgst": ${ddgst:-false} 00:41:25.947 }, 00:41:25.947 "method": "bdev_nvme_attach_controller" 00:41:25.947 } 00:41:25.947 EOF 00:41:25.947 )") 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3947819 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:25.947 "params": { 00:41:25.947 "name": "Nvme1", 00:41:25.947 "trtype": "tcp", 00:41:25.947 "traddr": "10.0.0.2", 00:41:25.947 "adrfam": "ipv4", 00:41:25.947 "trsvcid": "4420", 00:41:25.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:25.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:25.947 "hdgst": false, 00:41:25.947 "ddgst": false 00:41:25.947 }, 00:41:25.947 "method": "bdev_nvme_attach_controller" 00:41:25.947 }' 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:25.947 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:25.947 "params": { 00:41:25.947 "name": "Nvme1", 00:41:25.947 "trtype": "tcp", 00:41:25.947 "traddr": "10.0.0.2", 00:41:25.947 "adrfam": "ipv4", 00:41:25.947 "trsvcid": "4420", 00:41:25.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:25.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:25.948 "hdgst": false, 00:41:25.948 "ddgst": false 00:41:25.948 }, 00:41:25.948 "method": "bdev_nvme_attach_controller" 00:41:25.948 }' 00:41:25.948 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:25.948 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:25.948 "params": { 00:41:25.948 "name": "Nvme1", 00:41:25.948 "trtype": "tcp", 00:41:25.948 "traddr": "10.0.0.2", 00:41:25.948 "adrfam": "ipv4", 00:41:25.948 "trsvcid": "4420", 00:41:25.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:25.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:25.948 "hdgst": false, 00:41:25.948 "ddgst": false 00:41:25.948 }, 00:41:25.948 "method": "bdev_nvme_attach_controller" 00:41:25.948 }' 00:41:25.948 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:25.948 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:25.948 "params": { 00:41:25.948 "name": "Nvme1", 00:41:25.948 "trtype": "tcp", 00:41:25.948 "traddr": "10.0.0.2", 00:41:25.948 "adrfam": "ipv4", 00:41:25.948 "trsvcid": "4420", 00:41:25.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:25.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:25.948 "hdgst": false, 00:41:25.948 "ddgst": false 00:41:25.948 }, 00:41:25.948 "method": "bdev_nvme_attach_controller" 00:41:25.948 }' 00:41:26.206 [2024-12-10 12:44:32.778123] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:41:26.206 [2024-12-10 12:44:32.778218] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:26.206 [2024-12-10 12:44:32.779151] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:41:26.206 [2024-12-10 12:44:32.779246] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:26.206 [2024-12-10 12:44:32.779270] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:41:26.206 [2024-12-10 12:44:32.779372] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:26.206 [2024-12-10 12:44:32.781002] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:41:26.207 [2024-12-10 12:44:32.781079] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:26.207 [2024-12-10 12:44:32.978094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.207 [2024-12-10 12:44:33.030783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.466 [2024-12-10 12:44:33.091298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:26.466 [2024-12-10 12:44:33.129573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.466 [2024-12-10 12:44:33.133123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:41:26.466 [2024-12-10 12:44:33.235052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:26.466 [2024-12-10 12:44:33.239054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.725 [2024-12-10 12:44:33.357664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:26.984 Running I/O for 1 seconds... 00:41:26.984 Running I/O for 1 seconds... 00:41:26.984 Running I/O for 1 seconds... 00:41:26.984 Running I/O for 1 seconds... 
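Four bdevperf instances (write, read, flush, unmap; one core each via -m 0x10/0x20/0x40/0x80) attach to the same subsystem through a JSON config that gen_nvmf_target_json emits on /dev/fd/63. A standalone equivalent for one instance, with the params block taken verbatim from the printf output above; the "subsystems"/"config" wrapper is the standard SPDK JSON config layout, and /tmp/nvme1.json is just an illustrative path:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256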
00:41:27.921 12555.00 IOPS, 49.04 MiB/s
00:41:27.921 Latency(us)
00:41:27.921 [2024-12-10T11:44:34.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:27.921 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:41:27.921 Nvme1n1 : 1.01 12620.05 49.30 0.00 0.00 10110.06 4025.78 12170.97
00:41:27.921 [2024-12-10T11:44:34.747Z] ===================================================================================================================
00:41:27.921 [2024-12-10T11:44:34.747Z] Total : 12620.05 49.30 0.00 0.00 10110.06 4025.78 12170.97
00:41:27.921 9585.00 IOPS, 37.44 MiB/s
00:41:27.921 Latency(us)
00:41:27.921 [2024-12-10T11:44:34.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:27.921 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:41:27.921 Nvme1n1 : 1.01 9637.94 37.65 0.00 0.00 13224.68 4868.39 17476.27
00:41:27.921 [2024-12-10T11:44:34.747Z] ===================================================================================================================
00:41:27.921 [2024-12-10T11:44:34.747Z] Total : 9637.94 37.65 0.00 0.00 13224.68 4868.39 17476.27
00:41:28.180 9815.00 IOPS, 38.34 MiB/s
00:41:28.180 Latency(us)
00:41:28.180 [2024-12-10T11:44:35.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:28.180 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:41:28.180 Nvme1n1 : 1.01 9904.82 38.69 0.00 0.00 12888.64 1856.85 21595.67
00:41:28.180 [2024-12-10T11:44:35.006Z] ===================================================================================================================
00:41:28.180 [2024-12-10T11:44:35.006Z] Total : 9904.82 38.69 0.00 0.00 12888.64 1856.85 21595.67
00:41:28.180 215472.00 IOPS, 841.69 MiB/s
00:41:28.180 Latency(us)
00:41:28.180 [2024-12-10T11:44:35.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:28.180 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:41:28.180 Nvme1n1 : 1.00 215117.53 840.30 0.00 0.00 592.08 265.26 1622.80
00:41:28.180 [2024-12-10T11:44:35.006Z] ===================================================================================================================
00:41:28.180 [2024-12-10T11:44:35.006Z] Total : 215117.53 840.30 0.00 0.00 592.08 265.26 1622.80
00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3947821 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3947823 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3947826 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:28.749 rmmod nvme_tcp 00:41:28.749 rmmod nvme_fabrics 00:41:28.749 rmmod nvme_keyring 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3947570 ']' 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3947570 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3947570 ']' 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3947570 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:28.749 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947570 00:41:29.008 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:29.008 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:29.008 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947570' 00:41:29.008 killing process with pid 3947570 00:41:29.008 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3947570 00:41:29.008 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3947570 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:29.946 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:31.852 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:32.111 00:41:32.111 real 0m12.727s 00:41:32.111 user 0m22.808s 00:41:32.111 sys 0m6.666s 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.111 ************************************ 00:41:32.111 END TEST nvmf_bdev_io_wait 00:41:32.111 ************************************ 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:32.111 ************************************ 00:41:32.111 START TEST nvmf_queue_depth 00:41:32.111 ************************************ 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:32.111 * Looking for test storage... 
00:41:32.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:32.111 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:32.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.112 --rc genhtml_branch_coverage=1 00:41:32.112 --rc genhtml_function_coverage=1 00:41:32.112 --rc genhtml_legend=1 00:41:32.112 --rc geninfo_all_blocks=1 00:41:32.112 --rc geninfo_unexecuted_blocks=1 00:41:32.112 00:41:32.112 ' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:32.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.112 --rc genhtml_branch_coverage=1 00:41:32.112 --rc genhtml_function_coverage=1 00:41:32.112 --rc genhtml_legend=1 00:41:32.112 --rc geninfo_all_blocks=1 00:41:32.112 --rc geninfo_unexecuted_blocks=1 00:41:32.112 00:41:32.112 ' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:32.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.112 --rc genhtml_branch_coverage=1 00:41:32.112 --rc genhtml_function_coverage=1 00:41:32.112 --rc genhtml_legend=1 00:41:32.112 --rc geninfo_all_blocks=1 00:41:32.112 --rc geninfo_unexecuted_blocks=1 00:41:32.112 00:41:32.112 ' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:32.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.112 --rc genhtml_branch_coverage=1 00:41:32.112 --rc genhtml_function_coverage=1 00:41:32.112 --rc genhtml_legend=1 00:41:32.112 --rc geninfo_all_blocks=1 00:41:32.112 --rc 
geninfo_unexecuted_blocks=1 00:41:32.112 00:41:32.112 ' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:32.112 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:32.372 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:37.647 12:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:37.647 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:37.647 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:41:37.647 Found net devices under 0000:af:00.0: cvl_0_0 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:37.647 Found net devices under 0000:af:00.1: cvl_0_1 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:37.647 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:37.647 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:37.647 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:37.647 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:37.647 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:37.647 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:37.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:37.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:41:37.648 00:41:37.648 --- 10.0.0.2 ping statistics --- 00:41:37.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.648 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:37.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:37.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:41:37.648 00:41:37.648 --- 10.0.0.1 ping statistics --- 00:41:37.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.648 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3951758 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3951758 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3951758 ']' 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:37.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
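nvmfappstart has just launched the target inside the cvl_0_0_ns_spdk namespace and waitforlisten now polls its RPC socket before the test proceeds. Condensed, the launch-and-wait pattern is roughly the following sketch; the binary path and the retry budget are assumptions, not the exact common.sh code:

NS=cvl_0_0_ns_spdk
NVMF_TGT=/path/to/spdk/build/bin/nvmf_tgt   # assumed; this run uses the jenkins workspace build

# Same flags as traced above: shm id 0, full tracepoint mask, interrupt mode, core 1.
ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Poll the default RPC socket until the app answers, as waitforlisten does;
# bail out early if the target dies during startup.
for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
    sleep 0.1
done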
00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:37.648 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:37.648 [2024-12-10 12:44:44.355126] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:37.648 [2024-12-10 12:44:44.357049] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:41:37.648 [2024-12-10 12:44:44.357113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:37.907 [2024-12-10 12:44:44.475853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:37.907 [2024-12-10 12:44:44.575251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:37.907 [2024-12-10 12:44:44.575292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:37.907 [2024-12-10 12:44:44.575306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:37.907 [2024-12-10 12:44:44.575314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:37.907 [2024-12-10 12:44:44.575324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:37.907 [2024-12-10 12:44:44.576703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:38.167 [2024-12-10 12:44:44.877183] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:38.167 [2024-12-10 12:44:44.877426] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:38.430 [2024-12-10 12:44:45.201559] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.430 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:38.692 Malloc0 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:38.692 [2024-12-10 12:44:45.333598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3951971 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3951971 /var/tmp/bdevperf.sock 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3951971 ']' 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:38.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:38.692 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:38.692 [2024-12-10 12:44:45.408464] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:41:38.692 [2024-12-10 12:44:45.408552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951971 ] 00:41:38.951 [2024-12-10 12:44:45.520723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:38.951 [2024-12-10 12:44:45.624835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:39.517 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:39.517 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:39.517 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:39.517 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.517 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.776 NVMe0n1 00:41:39.776 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.776 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:39.776 Running I/O for 10 seconds... 00:41:42.090 10240.00 IOPS, 40.00 MiB/s [2024-12-10T11:44:49.853Z] 10583.00 IOPS, 41.34 MiB/s [2024-12-10T11:44:50.790Z] 10586.00 IOPS, 41.35 MiB/s [2024-12-10T11:44:51.726Z] 10639.75 IOPS, 41.56 MiB/s [2024-12-10T11:44:52.663Z] 10653.80 IOPS, 41.62 MiB/s [2024-12-10T11:44:53.602Z] 10750.67 IOPS, 41.99 MiB/s [2024-12-10T11:44:54.537Z] 10781.86 IOPS, 42.12 MiB/s [2024-12-10T11:44:55.912Z] 10804.12 IOPS, 42.20 MiB/s [2024-12-10T11:44:56.847Z] 10809.22 IOPS, 42.22 MiB/s [2024-12-10T11:44:56.847Z] 10844.50 IOPS, 42.36 MiB/s 00:41:50.021 Latency(us) 00:41:50.021 [2024-12-10T11:44:56.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:50.021 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:50.021 Verification LBA range: start 0x0 length 0x4000 00:41:50.021 NVMe0n1 : 10.08 10853.08 42.39 0.00 0.00 93979.96 20721.86 63413.88 00:41:50.021 [2024-12-10T11:44:56.848Z] =================================================================================================================== 00:41:50.022 [2024-12-10T11:44:56.848Z] Total : 10853.08 42.39 0.00 0.00 93979.96 20721.86 63413.88 00:41:50.022 { 00:41:50.022 "results": [ 00:41:50.022 { 00:41:50.022 "job": "NVMe0n1", 00:41:50.022 "core_mask": "0x1", 00:41:50.022 "workload": "verify", 00:41:50.022 "status": "finished", 00:41:50.022 "verify_range": { 00:41:50.022 "start": 0, 00:41:50.022 "length": 16384 00:41:50.022 }, 00:41:50.022 "queue_depth": 1024, 00:41:50.022 "io_size": 4096, 00:41:50.022 "runtime": 10.080639, 00:41:50.022 "iops": 10853.081833403616, 00:41:50.022 "mibps": 42.394850911732874, 00:41:50.022 "io_failed": 0, 00:41:50.022 "io_timeout": 0, 00:41:50.022 "avg_latency_us": 93979.95769599125, 00:41:50.022 "min_latency_us": 20721.859047619047, 00:41:50.022 "max_latency_us": 63413.8819047619 00:41:50.022 } 
00:41:50.022 ], 00:41:50.022 "core_count": 1 00:41:50.022 } 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3951971 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3951971 ']' 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3951971 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951971 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951971' 00:41:50.022 killing process with pid 3951971 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3951971 00:41:50.022 Received shutdown signal, test time was about 10.000000 seconds 00:41:50.022 00:41:50.022 Latency(us) 00:41:50.022 [2024-12-10T11:44:56.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:50.022 [2024-12-10T11:44:56.848Z] =================================================================================================================== 00:41:50.022 [2024-12-10T11:44:56.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:50.022 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3951971 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:50.959 rmmod nvme_tcp 00:41:50.959 rmmod nvme_fabrics 00:41:50.959 rmmod nvme_keyring 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
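Teardown mirrors setup in reverse: the initiator-side kernel modules have just been removed (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), and the trace continues below by killing the target process, dropping the tagged iptables rule, and tearing down the namespace. The essential ordering, as a simplified sketch of the helpers traced here:

# Stop the target first so nothing is still listening in the namespace.
killprocess() {                               # simplified version of the traced helper
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    kill "$pid" && wait "$pid" 2>/dev/null    # SIGTERM, then reap (works for child processes)
}
killprocess "$nvmfpid"

# Drop only the ACCEPT rule the test added: setup tagged it with
# '-m comment --comment SPDK_NVMF:...', so it is greppable here.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Finally remove the target-side namespace and re-flush the initiator interface.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1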
00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3951758 ']' 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3951758 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3951758 ']' 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3951758 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951758 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951758' 00:41:50.959 killing process with pid 3951758 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3951758 00:41:50.959 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3951758 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:52.338 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:54.244 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:54.244 00:41:54.244 real 0m22.301s 00:41:54.244 user 0m26.899s 00:41:54.244 sys 0m6.160s 00:41:54.244 12:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.244 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:54.244 ************************************ 00:41:54.244 END TEST nvmf_queue_depth 00:41:54.244 ************************************ 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:54.504 ************************************ 00:41:54.504 START TEST nvmf_target_multipath 00:41:54.504 ************************************ 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:54.504 * Looking for test storage... 00:41:54.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:54.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.504 --rc genhtml_branch_coverage=1 00:41:54.504 --rc genhtml_function_coverage=1 00:41:54.504 --rc genhtml_legend=1 00:41:54.504 --rc geninfo_all_blocks=1 00:41:54.504 --rc geninfo_unexecuted_blocks=1 00:41:54.504 00:41:54.504 ' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:54.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.504 --rc genhtml_branch_coverage=1 00:41:54.504 --rc genhtml_function_coverage=1 00:41:54.504 --rc genhtml_legend=1 00:41:54.504 --rc geninfo_all_blocks=1 00:41:54.504 --rc geninfo_unexecuted_blocks=1 00:41:54.504 00:41:54.504 ' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:54.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.504 --rc genhtml_branch_coverage=1 00:41:54.504 --rc genhtml_function_coverage=1 00:41:54.504 --rc genhtml_legend=1 
00:41:54.504 --rc geninfo_all_blocks=1 00:41:54.504 --rc geninfo_unexecuted_blocks=1 00:41:54.504 00:41:54.504 ' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:54.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.504 --rc genhtml_branch_coverage=1 00:41:54.504 --rc genhtml_function_coverage=1 00:41:54.504 --rc genhtml_legend=1 00:41:54.504 --rc geninfo_all_blocks=1 00:41:54.504 --rc geninfo_unexecuted_blocks=1 00:41:54.504 00:41:54.504 ' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.504 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:54.505 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:59.778 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:59.779 12:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:59.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:59.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:59.779 12:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:59.779 Found net devices under 0000:af:00.0: cvl_0_0 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:59.779 Found net devices under 0000:af:00.1: cvl_0_1 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:59.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:59.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:41:59.779 00:41:59.779 --- 10.0.0.2 ping statistics --- 00:41:59.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.779 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:59.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:59.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:41:59.779 00:41:59.779 --- 10.0.0.1 ping statistics --- 00:41:59.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.779 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:59.779 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:00.039 only one NIC for nvmf test 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:00.039 rmmod nvme_tcp 00:42:00.039 rmmod nvme_fabrics 00:42:00.039 rmmod nvme_keyring 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:00.039 12:45:06 
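nvmf_tcp_init, traced above, turns the two ports into a point-to-point initiator/target pair: cvl_0_0 moves into a fresh namespace as the target at 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an INPUT rule opens TCP/4420, and one ping in each direction proves the path before any NVMe traffic flows. The commands, collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port now lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The iptables comment tag matters: teardown later removes exactly the SPDK_NVMF-tagged rules and nothing else.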
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:00.039 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:01.945 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:02.205 12:45:08 
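nvmftestfini runs twice in quick succession here: once from multipath.sh line 47, because the test needs a second target IP and NVMF_SECOND_TARGET_IP is empty on this single-NIC-pair rig ('only one NIC for nvmf test'), and again from the EXIT trap, which is why the second pass prints no rmmod lines, as the modules are already gone. Condensed, with the helper bodies paraphrased:

    nvmftestfini() {
      nvmfcleanup                    # sync, then up to 20 tries of: modprobe -v -r nvme-tcp nvme-fabrics
      [[ -n $nvmfpid ]] && killprocess "$nvmfpid"   # no target process in this early-exit case
      iptr                           # iptables-save | grep -v SPDK_NVMF | iptables-restore
      remove_spdk_ns                 # delete cvl_0_0_ns_spdk; cvl_0_0 falls back to the root ns
      ip -4 addr flush cvl_0_1
    }

The names (nvmfcleanup, iptr, remove_spdk_ns) follow the trace; remove_spdk_ns is additionally wrapped in xtrace_disable_per_cmd with fd 15 redirected so its own tracing stays out of the log.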
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:02.205 00:42:02.205 real 0m7.680s 00:42:02.205 user 0m1.604s 00:42:02.205 sys 0m4.066s 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:02.205 ************************************ 00:42:02.205 END TEST nvmf_target_multipath 00:42:02.205 ************************************ 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:02.205 ************************************ 00:42:02.205 START TEST nvmf_zcopy 00:42:02.205 ************************************ 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:02.205 * Looking for test storage... 
00:42:02.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:42:02.205 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:02.205 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.465 --rc genhtml_branch_coverage=1 00:42:02.465 --rc genhtml_function_coverage=1 00:42:02.465 --rc genhtml_legend=1 00:42:02.465 --rc geninfo_all_blocks=1 00:42:02.465 --rc geninfo_unexecuted_blocks=1 00:42:02.465 00:42:02.465 ' 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
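Before the zcopy test proper, autotest_common probes the installed lcov to pick coverage flags; the `lt 1.15 2` sequence above is the generic field-by-field comparator from scripts/common.sh, splitting each version on '.', '-' and ':'. A condensed sketch consistent with the traced steps (the decimal validation and greater-than operators are elided):

    cmp_versions() {
      local ver1 ver2 ver1_l ver2_l op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}         # 2 and 1 for "1.15" vs "2"
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
          [[ $op == '>' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
          [[ $op == '<' ]]; return
        fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds: a 1.x lcov, hence the --rc lcov_* flags exported above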
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:02.465 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:02.466 12:45:09 
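The wall of PATH text above is paths/export.sh being sourced yet again: it prepends its three toolchain directories unconditionally on every pass, so the same golangci/go/protoc triplet stacks up at the front of PATH several times over. Harmless, but it explains the line length. Reduced to its visible effect (exact script contents assumed from the @2..@6 trace lines):

    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo "$PATH"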
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:02.466 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:07.743 12:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:07.743 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:07.743 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:07.743 Found net devices under 0000:af:00.0: cvl_0_0 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:07.743 Found net devices under 0000:af:00.1: cvl_0_1 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:07.743 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:07.744 12:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:07.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:07.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:42:07.744 00:42:07.744 --- 10.0.0.2 ping statistics --- 00:42:07.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:07.744 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:07.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:07.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:42:07.744 00:42:07.744 --- 10.0.0.1 ping statistics --- 00:42:07.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:07.744 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3961212 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3961212 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3961212 ']' 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:07.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:07.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:07.744 [2024-12-10 12:45:14.497894] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:07.744 [2024-12-10 12:45:14.499914] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:42:07.744 [2024-12-10 12:45:14.499995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:08.003 [2024-12-10 12:45:14.615960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.003 [2024-12-10 12:45:14.719280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:08.003 [2024-12-10 12:45:14.719320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:08.003 [2024-12-10 12:45:14.719333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:08.003 [2024-12-10 12:45:14.719342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:08.003 [2024-12-10 12:45:14.719352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:08.003 [2024-12-10 12:45:14.720619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:08.262 [2024-12-10 12:45:15.025217] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:08.262 [2024-12-10 12:45:15.025465] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
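nvmfappstart, just traced, launches the target inside the namespace: `-m 0x2` puts the lone reactor on core 1 (hence 'Reactor started on core 1'), `-e 0xFFFF` enables every tracepoint group, and `--interrupt-mode` is what produces the thread.c notices about app_thread and nvmf_tgt_poll_group_000 running in intr mode. Reduced:

    # nvmfappstart -m 0x2, per the trace; NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs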
00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.521 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.521 [2024-12-10 12:45:15.345720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.780 [2024-12-10 12:45:15.373656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:08.780 12:45:15 
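With the target up, zcopy.sh assembles the whole data path over rpc_cmd. `-c 0` sets the in-capsule data size to zero and `--zcopy` switches the TCP transport onto the zero-copy request path this test exercises; `-m 10` caps the subsystem at ten namespaces. The sequence, as traced (the malloc bdev output and the namespace attach complete just below):

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0    # 32 MiB backing bdev, 4 KiB blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1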
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.780 malloc0 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:08.780 { 00:42:08.780 "params": { 00:42:08.780 "name": "Nvme$subsystem", 00:42:08.780 "trtype": "$TEST_TRANSPORT", 00:42:08.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:08.780 "adrfam": "ipv4", 00:42:08.780 "trsvcid": "$NVMF_PORT", 00:42:08.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:08.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:08.780 "hdgst": ${hdgst:-false}, 00:42:08.780 "ddgst": ${ddgst:-false} 00:42:08.780 }, 00:42:08.780 "method": "bdev_nvme_attach_controller" 00:42:08.780 } 00:42:08.780 EOF 00:42:08.780 )") 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:08.780 12:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:08.780 "params": { 00:42:08.780 "name": "Nvme1", 00:42:08.780 "trtype": "tcp", 00:42:08.780 "traddr": "10.0.0.2", 00:42:08.780 "adrfam": "ipv4", 00:42:08.780 "trsvcid": "4420", 00:42:08.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:08.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:08.780 "hdgst": false, 00:42:08.780 "ddgst": false 00:42:08.780 }, 00:42:08.780 "method": "bdev_nvme_attach_controller" 00:42:08.780 }' 00:42:08.780 [2024-12-10 12:45:15.531371] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
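bdevperf does no fabric discovery of its own, so gen_nvmf_target_json (the heredoc-plus-jq pipeline traced above) renders a one-controller bdev_nvme_attach_controller config from the harness variables; NVMF_FIRST_TARGET_IP and NVMF_PORT become the literal traddr/trsvcid in the JSON printed at the end, and /dev/fd/62 is just bash process substitution, so the config never touches disk:

    # -t 10: seconds; -q 128: queue depth; -w verify: write-then-read-back; -o 8192: 8 KiB I/Os
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192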
00:42:08.780 [2024-12-10 12:45:15.531456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961368 ]
00:42:09.044 [2024-12-10 12:45:15.643066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:09.044 [2024-12-10 12:45:15.750716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:09.385 Running I/O for 10 seconds...
00:42:11.752 7274.00 IOPS, 56.83 MiB/s [2024-12-10T11:45:19.515Z]
7337.50 IOPS, 57.32 MiB/s [2024-12-10T11:45:20.450Z]
7358.33 IOPS, 57.49 MiB/s [2024-12-10T11:45:21.387Z]
7354.75 IOPS, 57.46 MiB/s [2024-12-10T11:45:22.324Z]
7348.20 IOPS, 57.41 MiB/s [2024-12-10T11:45:23.260Z]
7356.00 IOPS, 57.47 MiB/s [2024-12-10T11:45:24.638Z]
7363.00 IOPS, 57.52 MiB/s [2024-12-10T11:45:25.573Z]
7360.62 IOPS, 57.50 MiB/s [2024-12-10T11:45:26.510Z]
7370.78 IOPS, 57.58 MiB/s [2024-12-10T11:45:26.510Z]
7373.50 IOPS, 57.61 MiB/s
00:42:19.684                                                              Latency(us)
00:42:19.684 [2024-12-10T11:45:26.510Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:42:19.684 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:42:19.684 Verification LBA range: start 0x0 length 0x1000
00:42:19.684 Nvme1n1                     :      10.05    7345.86      57.39      0.00     0.00   17309.58    2418.59   43191.34
00:42:19.684 [2024-12-10T11:45:26.510Z] ===================================================================================================================
00:42:19.684 [2024-12-10T11:45:26.510Z] Total                       :               7345.86      57.39      0.00     0.00   17309.58    2418.59   43191.34
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3963240
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:42:20.620 {
00:42:20.620 "params": {
00:42:20.620 "name": "Nvme$subsystem",
00:42:20.620 "trtype": "$TEST_TRANSPORT",
00:42:20.620 "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:20.620 "adrfam": "ipv4",
00:42:20.620 "trsvcid": "$NVMF_PORT",
00:42:20.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:20.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:20.620 "hdgst": ${hdgst:-false},
00:42:20.620 "ddgst": ${ddgst:-false}
00:42:20.620 },
00:42:20.620 "method": "bdev_nvme_attach_controller"
00:42:20.620 }
00:42:20.620 EOF
00:42:20.620 )")
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
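The verify pass settles around 7,350 IOPS (about 57 MiB/s) with an average latency of 17,309 us at queue depth 128, which is self-consistent: by Little's law, 128 outstanding I/Os divided by 0.0173 s is roughly 7,400 IOPS. The trace then starts a second, 5-second bdevperf pass, this time mixed random read/write. A sketch of that launch (the process substitution behind /dev/fd/63 and the backgrounding are inferred from the recorded perfpid=3963240, not shown verbatim in the trace):

  # 5 s of 8 KiB random I/O, 50% reads (-M 50), queue depth 128, run in the
  # background so the harness can keep issuing RPCs while the workload runs
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!   # 3963240 in this run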
00:42:20.620 [2024-12-10 12:45:27.185210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:20.620 [2024-12-10 12:45:27.185253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:42:20.620 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:42:20.620 "params": {
00:42:20.620 "name": "Nvme1",
00:42:20.620 "trtype": "tcp",
00:42:20.620 "traddr": "10.0.0.2",
00:42:20.620 "adrfam": "ipv4",
00:42:20.620 "trsvcid": "4420",
00:42:20.620 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:42:20.620 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:42:20.620 "hdgst": false,
00:42:20.620 "ddgst": false
00:42:20.620 },
00:42:20.620 "method": "bdev_nvme_attach_controller"
00:42:20.620 }'
00:42:20.620 [2024-12-10 12:45:27.250307] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
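From this point to the end of the excerpt the target log repeats the same two-line failure every few milliseconds: subsystem.c rejects the add because NSID 1 is already occupied by malloc0, and the RPC layer then reports the failed nvmf_subsystem_add_ns. The driving loop is not itself in the trace; a hypothetical reconstruction that would produce exactly this pattern is a loop that re-adds the namespace for as long as the background bdevperf is alive, so the subsystem is repeatedly paused and resumed underneath zero-copy I/O:

  # Hypothetical reconstruction (assumed, not shown in the trace): hammer
  # nvmf_subsystem_add_ns while bdevperf still runs. Every call is expected
  # to fail with "Requested NSID 1 already in use"; the point is the
  # pause/resume churn it induces on the subsystem, not the result.
  while kill -0 "$perfpid" 2> /dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done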
00:42:20.620 [2024-12-10 12:45:27.250380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963240 ]
00:42:20.620 [2024-12-10 12:45:27.363366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:20.880 [2024-12-10 12:45:27.476775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:21.398 Running I/O for 5 seconds...
00:42:22.176 14137.00 IOPS, 110.45 MiB/s [2024-12-10T11:45:29.002Z]
00:42:23.214 14188.50 IOPS, 110.85 MiB/s [2024-12-10T11:45:30.040Z]
00:42:24.251 [2024-12-10 12:45:30.832272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-12-10 12:45:30.832297]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.251 [2024-12-10 12:45:30.846958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.251 [2024-12-10 12:45:30.846983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.251 [2024-12-10 12:45:30.864117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.251 [2024-12-10 12:45:30.864141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.251 [2024-12-10 12:45:30.878884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.251 [2024-12-10 12:45:30.878908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.251 [2024-12-10 12:45:30.896293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.251 [2024-12-10 12:45:30.896317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.251 [2024-12-10 12:45:30.908821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.251 [2024-12-10 12:45:30.908844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.251 [2024-12-10 12:45:30.922211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.251 [2024-12-10 12:45:30.922236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.251 [2024-12-10 12:45:30.939648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.251 [2024-12-10 12:45:30.939672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 [2024-12-10 12:45:30.952602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:30.952627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 [2024-12-10 12:45:30.967098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:30.967123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 [2024-12-10 12:45:30.984009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:30.984033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 14123.33 IOPS, 110.34 MiB/s [2024-12-10T11:45:31.078Z] [2024-12-10 12:45:30.998045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:30.998069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 [2024-12-10 12:45:31.015450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:31.015478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 [2024-12-10 12:45:31.031021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:31.031045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 [2024-12-10 12:45:31.047988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:31.048017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.252 [2024-12-10 
12:45:31.064418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.252 [2024-12-10 12:45:31.064443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.077489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.077513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.089838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.089864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.107564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.107589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.124149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.124184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.137756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.137781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.155799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.155826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.168919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.168943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.182399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.182434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.199716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.199741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.213234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.213259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.226583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.226607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.244389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.244421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.256877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.256902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.271504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.271528] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.286708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.286734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.304300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.304325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.317334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.317359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.511 [2024-12-10 12:45:31.330429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.511 [2024-12-10 12:45:31.330459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.347843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.347869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.360245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.360270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.376662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.376688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.389842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.389867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.407226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.407251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.423447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.423472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.438602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.438627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.455548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.455573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.470867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.470892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.488646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.488671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.501492] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.501516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.518952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.518977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.535630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.535654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.550968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.550993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.568152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.568183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.770 [2024-12-10 12:45:31.582517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.770 [2024-12-10 12:45:31.582543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.599954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.599979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.613991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.614015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.632003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.632027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.645240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.645264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.658017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.658041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.675765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.675790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.690816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.690840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.708153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.708185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.721593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.721617] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.739181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.739206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.755448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.755472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.771344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.771368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.787910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.787933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.802021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.802046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.819453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.819477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.835565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.835590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.030 [2024-12-10 12:45:31.852246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.030 [2024-12-10 12:45:31.852271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.863859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.863883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.880545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.880570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.893382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.893405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.910271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.910295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.927207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.927231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.944179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.944220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.957378] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.957402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:31.971756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.971780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 14145.00 IOPS, 110.51 MiB/s [2024-12-10T11:45:32.115Z] [2024-12-10 12:45:31.987476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:31.987501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.003188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.003229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.020190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.020215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.032504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.032528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.048007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.048032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.063872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.063898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.079939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.079964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.095970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.095995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.289 [2024-12-10 12:45:32.112902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.289 [2024-12-10 12:45:32.112927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.126222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.126249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.143487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.143512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.158416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.158449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.175907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:25.549 [2024-12-10 12:45:32.175931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.188115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.188140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.203728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.203758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.218301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.218325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.235384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.235416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.251218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.251242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.267977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.268002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.283985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.284009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.299771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.299795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.315952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.315976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.331849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.331874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.347016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.347041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.549 [2024-12-10 12:45:32.364207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.549 [2024-12-10 12:45:32.364231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.376553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.376578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.391720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.391744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.408117] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.408141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.422689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.422713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.440217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.440242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.452886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.452910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.467641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.467665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.483751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.483776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.500477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.500505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.513158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.513189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.526145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.526178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.543275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.543300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.559341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.559367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.575414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.575441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.590695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.590721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.607925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.607950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.808 [2024-12-10 12:45:32.623190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.808 [2024-12-10 12:45:32.623215] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.640284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.640309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.652862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.652887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.667598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.667623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.682986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.683010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.700307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.700332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.713319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.713344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.726022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.726046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.743392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.743417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.759582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.759608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.775961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.067 [2024-12-10 12:45:32.775986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.067 [2024-12-10 12:45:32.789146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.068 [2024-12-10 12:45:32.789182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.068 [2024-12-10 12:45:32.803727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.068 [2024-12-10 12:45:32.803752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.068 [2024-12-10 12:45:32.818758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.068 [2024-12-10 12:45:32.818783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.068 [2024-12-10 12:45:32.836373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.068 [2024-12-10 12:45:32.836399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.068 [2024-12-10 12:45:32.850627] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.068 [2024-12-10 12:45:32.850651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.068 [2024-12-10 12:45:32.867499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.068 [2024-12-10 12:45:32.867524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.068 [2024-12-10 12:45:32.884733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.068 [2024-12-10 12:45:32.884758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 12:45:32.897736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:32.897760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 12:45:32.915409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:32.915444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 12:45:32.932081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:32.932106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 12:45:32.944153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:32.944187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 12:45:32.959873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:32.959898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 12:45:32.975828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:32.975853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 14164.40 IOPS, 110.66 MiB/s [2024-12-10T11:45:33.153Z] [2024-12-10 12:45:32.991704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:32.991730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 00:42:26.327 Latency(us) 00:42:26.327 [2024-12-10T11:45:33.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:26.327 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:42:26.327 Nvme1n1 : 5.01 14167.51 110.68 0.00 0.00 9025.20 2340.57 15728.64 00:42:26.327 [2024-12-10T11:45:33.153Z] =================================================================================================================== 00:42:26.327 [2024-12-10T11:45:33.153Z] Total : 14167.51 110.68 0.00 0.00 9025.20 2340.57 15728.64 00:42:26.327 [2024-12-10 12:45:33.001147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:33.001177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 12:45:33.013369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.327 [2024-12-10 12:45:33.013391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.327 [2024-12-10 
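(Consistency check on the summary, plain arithmetic rather than anything new from this run: the MiB/s column is just IOPS times the job's 8192-byte IO size, so the per-second progress lines and the final table agree.)

awk 'BEGIN { printf "%.2f\n", 14167.51 * 8192 / 1048576 }'   # prints 110.68, the MiB/s reported for Nvme1n1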
[... after the I/O summary the same error pair resumes at a steady ~12 ms cadence from 12:45:33.001 through 12:45:33.889; the final occurrence is shown below ...]
00:42:27.107 [2024-12-10 12:45:33.889139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:27.107 [2024-12-10 12:45:33.889158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
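The pair collapsed above is the target rejecting a duplicate namespace ID: spdk_nvmf_subsystem_add_ns_ext() finds NSID 1 still attached, and the RPC layer surfaces that as "Unable to add namespace". A minimal reproduction sketch, not taken from this run (it assumes a running SPDK target, scripts/rpc.py in the SPDK tree, and a hypothetical malloc1 bdev; the subsystem NQN is the one this test uses):

./scripts/rpc.py bdev_malloc_create 64 512 -b malloc1                           # hypothetical 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1  # NSID 1 is taken, so the RPC fails
# target console: subsystem.c: *ERROR*: Requested NSID 1 already in use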
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3963240) - No such process 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3963240 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:27.107 delay0 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.107 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:27.366 [2024-12-10 12:45:34.065462] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:35.483 Initializing NVMe Controllers 00:42:35.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:35.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:35.483 Initialization complete. Launching workers. 
00:42:35.483 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 12239 00:42:35.484 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12465, failed to submit 64 00:42:35.484 success 12345, unsuccessful 120, failed 0 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:35.484 rmmod nvme_tcp 00:42:35.484 rmmod nvme_fabrics 00:42:35.484 rmmod nvme_keyring 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3961212 ']' 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3961212 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3961212 ']' 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3961212 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3961212 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3961212' 00:42:35.484 killing process with pid 3961212 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3961212 00:42:35.484 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3961212 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:35.743 12:45:42 
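Aside: the zcopy abort pass above reduces to the following sequence (a sketch reconstructed from the trace; rpc_cmd in these scripts is the autotest wrapper around scripts/rpc.py, and paths are relative to the SPDK repo root):

  # Back the namespace with an artificially slow (~1 s latency) delay bdev so aborts have inflight I/O to hit
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive 50/50 randrw at queue depth 64 from core 0 for 5 s, aborting I/Os as it goes
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

In the summary above, success plus unsuccessful aborts (12345 + 120) equals the 12465 aborts submitted, which is the accounting invariant the test checks.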
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:35.743 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:38.279 00:42:38.279 real 0m35.738s 00:42:38.279 user 0m47.665s 00:42:38.279 sys 0m12.871s 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:38.279 ************************************ 00:42:38.279 END TEST nvmf_zcopy 00:42:38.279 ************************************ 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:38.279 ************************************ 00:42:38.279 START TEST nvmf_nmic 00:42:38.279 ************************************ 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:38.279 * Looking for test storage... 
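To reproduce the nmic suite that starts here outside of CI, the recorded invocation amounts to the following (a sketch; it assumes a built SPDK tree plus the NIC pair and root privileges the script expects):

  sudo ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode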
00:42:38.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:38.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.279 --rc genhtml_branch_coverage=1 00:42:38.279 --rc genhtml_function_coverage=1 00:42:38.279 --rc genhtml_legend=1 00:42:38.279 --rc geninfo_all_blocks=1 00:42:38.279 --rc geninfo_unexecuted_blocks=1 00:42:38.279 00:42:38.279 ' 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:38.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.279 --rc genhtml_branch_coverage=1 00:42:38.279 --rc genhtml_function_coverage=1 00:42:38.279 --rc genhtml_legend=1 00:42:38.279 --rc geninfo_all_blocks=1 00:42:38.279 --rc geninfo_unexecuted_blocks=1 00:42:38.279 00:42:38.279 ' 00:42:38.279 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:38.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.279 --rc genhtml_branch_coverage=1 00:42:38.279 --rc genhtml_function_coverage=1 00:42:38.279 --rc genhtml_legend=1 00:42:38.279 --rc geninfo_all_blocks=1 00:42:38.279 --rc geninfo_unexecuted_blocks=1 00:42:38.279 00:42:38.279 ' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.280 --rc genhtml_branch_coverage=1 00:42:38.280 --rc genhtml_function_coverage=1 00:42:38.280 --rc genhtml_legend=1 00:42:38.280 --rc geninfo_all_blocks=1 00:42:38.280 --rc geninfo_unexecuted_blocks=1 00:42:38.280 00:42:38.280 ' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:38.280 12:45:44 
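The host identity used by the nvme connect calls later in this test is derived once while sourcing common.sh, roughly as follows (a sketch of the traced lines; the parameter expansion is an assumption about how the UUID suffix is peeled off the generated NQN):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: hostid is the UUID suffix of the hostnqn
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")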
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:38.280 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:43.548 12:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:43.548 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:43.549 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:43.549 12:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:43.549 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:43.549 Found net devices under 0000:af:00.0: cvl_0_0 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.549 
12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:43.549 Found net devices under 0000:af:00.1: cvl_0_1 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
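Condensed, the network-namespace plumbing traced above is the following (a sketch of the commands from nvmf/common.sh; the link-up and ping verification follow immediately below):

  ip netns add cvl_0_0_ns_spdk                   # the target runs in its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move one port of the E810 pair inside it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side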
00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:43.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:43.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:42:43.549 00:42:43.549 --- 10.0.0.2 ping statistics --- 00:42:43.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.549 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:43.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:43.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:42:43.549 00:42:43.549 --- 10.0.0.1 ping statistics --- 00:42:43.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.549 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:43.549 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3968715 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3968715 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3968715 ']' 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:43.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:43.550 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:43.550 [2024-12-10 12:45:49.712826] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:43.550 [2024-12-10 12:45:49.714928] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:42:43.550 [2024-12-10 12:45:49.714998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:43.550 [2024-12-10 12:45:49.833303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:43.550 [2024-12-10 12:45:49.943275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:43.550 [2024-12-10 12:45:49.943318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:43.550 [2024-12-10 12:45:49.943329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:43.550 [2024-12-10 12:45:49.943339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:43.550 [2024-12-10 12:45:49.943350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:43.550 [2024-12-10 12:45:49.945590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.550 [2024-12-10 12:45:49.945665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:43.550 [2024-12-10 12:45:49.945726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:43.550 [2024-12-10 12:45:49.945738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:43.550 [2024-12-10 12:45:50.290544] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:43.550 [2024-12-10 12:45:50.291714] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:43.550 [2024-12-10 12:45:50.293181] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
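Flag by flag, the target launch traced above (command as recorded; the flag readings below follow from the NOTICE lines around it):

  # -i 0              shared-memory ID (matches the 'spdk_trace -s nvmf -i 0' hint)
  # -e 0xFFFF         tracepoint group mask, per the app_setup_trace NOTICEs
  # --interrupt-mode  reactors wait on file descriptors instead of busy-polling
  # -m 0xF            core mask: four reactors on cores 0-3
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF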
00:42:43.550 [2024-12-10 12:45:50.294176] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:43.550 [2024-12-10 12:45:50.294498] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:43.809 [2024-12-10 12:45:50.562731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.809 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 Malloc0 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:44.068 
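Stripped of the xtrace noise, the target bring-up performed by this test is (a sketch; rpc_cmd is the harness wrapper around scripts/rpc.py):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420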
12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 [2024-12-10 12:45:50.674669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:44.068 test case1: single bdev can't be used in multiple subsystems 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 [2024-12-10 12:45:50.698373] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:44.068 [2024-12-10 12:45:50.698406] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:44.068 [2024-12-10 12:45:50.698418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.068 request: 00:42:44.068 { 00:42:44.068 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:44.068 "namespace": { 00:42:44.068 "bdev_name": "Malloc0", 00:42:44.068 "no_auto_visible": false, 00:42:44.068 "hide_metadata": false 00:42:44.068 }, 00:42:44.068 "method": "nvmf_subsystem_add_ns", 00:42:44.068 "req_id": 1 00:42:44.068 } 00:42:44.068 Got JSON-RPC error response 00:42:44.068 response: 00:42:44.068 { 00:42:44.068 "code": -32602, 00:42:44.068 "message": "Invalid parameters" 00:42:44.068 } 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:44.068 12:45:50 
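test case1 above is a negative check: Malloc0 is already claimed (type exclusive_write) by cnode1, so attaching it to a second subsystem must fail, flipping nmic_status to 1. In rpc.py terms (a sketch of the traced calls):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
  # expected to fail with the JSON-RPC error shown above: code -32602, 'Invalid parameters'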
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:44.068 Adding namespace failed - expected result. 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:44.068 test case2: host connect to nvmf target in multiple paths 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.068 [2024-12-10 12:45:50.710472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.068 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:44.327 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:44.586 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:44.586 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:42:44.586 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:44.586 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:44.586 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:42:46.490 12:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:46.490 12:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:46.490 12:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:46.490 12:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:46.490 12:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:46.490 12:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:42:46.490 12:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:46.759 [global] 00:42:46.759 thread=1 00:42:46.759 invalidate=1 
00:42:46.759 rw=write 00:42:46.759 time_based=1 00:42:46.759 runtime=1 00:42:46.759 ioengine=libaio 00:42:46.759 direct=1 00:42:46.759 bs=4096 00:42:46.759 iodepth=1 00:42:46.759 norandommap=0 00:42:46.759 numjobs=1 00:42:46.759 00:42:46.759 verify_dump=1 00:42:46.759 verify_backlog=512 00:42:46.759 verify_state_save=0 00:42:46.759 do_verify=1 00:42:46.759 verify=crc32c-intel 00:42:46.759 [job0] 00:42:46.759 filename=/dev/nvme0n1 00:42:46.759 Could not set queue depth (nvme0n1) 00:42:47.016 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:47.016 fio-3.35 00:42:47.016 Starting 1 thread 00:42:48.384 00:42:48.384 job0: (groupid=0, jobs=1): err= 0: pid=3969532: Tue Dec 10 12:45:54 2024 00:42:48.384 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:42:48.384 slat (nsec): min=9567, max=23672, avg=21923.68, stdev=2822.84 00:42:48.384 clat (usec): min=40879, max=41319, avg=40982.31, stdev=88.03 00:42:48.384 lat (usec): min=40902, max=41329, avg=41004.23, stdev=85.71 00:42:48.384 clat percentiles (usec): 00:42:48.384 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:48.384 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:48.384 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:48.384 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:48.384 | 99.99th=[41157] 00:42:48.384 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:42:48.384 slat (usec): min=10, max=27995, avg=66.27, stdev=1236.74 00:42:48.384 clat (usec): min=150, max=339, avg=165.99, stdev=17.31 00:42:48.384 lat (usec): min=161, max=28334, avg=232.26, stdev=1244.51 00:42:48.384 clat percentiles (usec): 00:42:48.384 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 157], 20.00th=[ 159], 00:42:48.384 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:42:48.384 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:42:48.384 | 99.00th=[ 247], 99.50th=[ 306], 99.90th=[ 338], 99.95th=[ 338], 00:42:48.384 | 99.99th=[ 338] 00:42:48.384 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:42:48.384 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:42:48.384 lat (usec) : 250=94.94%, 500=0.94% 00:42:48.384 lat (msec) : 50=4.12% 00:42:48.384 cpu : usr=0.49%, sys=0.88%, ctx=537, majf=0, minf=1 00:42:48.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.384 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:48.384 00:42:48.384 Run status group 0 (all jobs): 00:42:48.384 READ: bw=86.0KiB/s (88.1kB/s), 86.0KiB/s-86.0KiB/s (88.1kB/s-88.1kB/s), io=88.0KiB (90.1kB), run=1023-1023msec 00:42:48.384 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:42:48.384 00:42:48.384 Disk stats (read/write): 00:42:48.384 nvme0n1: ios=44/512, merge=0/0, ticks=1724/75, in_queue=1799, util=98.50% 00:42:48.384 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:48.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:48.641 12:45:55 
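For reference, the generated [job0] file above is equivalent, give or take a couple of defaults, to this standalone fio invocation (a sketch; option names map one-to-one onto the job-file keys shown, and /dev/nvme0n1 is the namespace connected earlier):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write \
      --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 --invalidate=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

With do_verify set, fio reads back and checksums what it wrote, which is where the small READ group in the run summary comes from.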
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:48.641 rmmod nvme_tcp 00:42:48.641 rmmod nvme_fabrics 00:42:48.641 rmmod nvme_keyring 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3968715 ']' 00:42:48.641 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3968715 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3968715 ']' 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3968715 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3968715 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3968715' 00:42:48.642 killing process with pid 3968715 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3968715 00:42:48.642 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3968715 00:42:50.011 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:50.011 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:50.012 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:52.540 00:42:52.540 real 0m14.151s 00:42:52.540 user 0m26.974s 00:42:52.540 sys 0m5.263s 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:52.540 ************************************ 00:42:52.540 END TEST nvmf_nmic 00:42:52.540 ************************************ 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:52.540 ************************************ 00:42:52.540 START TEST nvmf_fio_target 00:42:52.540 ************************************ 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:52.540 * Looking for test storage... 
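Before the nvmf_fio_target trace gets underway, the nvmf_nmic run that just completed is worth condensing: it adds a second listener, connects the same subsystem over both paths, polls until a block device with the expected serial appears, runs a short verified fio write, then disconnects and tears the target down. A minimal shell sketch of the connect-and-wait portion, reconstructed from the trace above (the NQN, addresses, and serial are the values used in this run; the polling loop approximates the waitforserial helper seen in the xtrace and is not its exact source):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
    # One subsystem, two TCP listeners -> two controllers, same namespace.
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # Poll (up to 16 tries, 2s apart) until lsblk reports a device whose
    # serial matches the target's.
    i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
        sleep 2
    done
    # ... run the fio-wrapper write job against /dev/nvme0n1, then:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1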
00:42:52.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:42:52.540 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.540 --rc genhtml_branch_coverage=1 00:42:52.540 --rc genhtml_function_coverage=1 00:42:52.540 --rc genhtml_legend=1 00:42:52.540 --rc geninfo_all_blocks=1 00:42:52.540 --rc geninfo_unexecuted_blocks=1 00:42:52.540 00:42:52.540 ' 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.540 --rc genhtml_branch_coverage=1 00:42:52.540 --rc genhtml_function_coverage=1 00:42:52.540 --rc genhtml_legend=1 00:42:52.540 --rc geninfo_all_blocks=1 00:42:52.540 --rc geninfo_unexecuted_blocks=1 00:42:52.540 00:42:52.540 ' 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.540 --rc genhtml_branch_coverage=1 00:42:52.540 --rc genhtml_function_coverage=1 00:42:52.540 --rc genhtml_legend=1 00:42:52.540 --rc geninfo_all_blocks=1 00:42:52.540 --rc geninfo_unexecuted_blocks=1 00:42:52.540 00:42:52.540 ' 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.540 --rc genhtml_branch_coverage=1 00:42:52.540 --rc genhtml_function_coverage=1 00:42:52.540 --rc genhtml_legend=1 00:42:52.540 --rc geninfo_all_blocks=1 00:42:52.540 --rc geninfo_unexecuted_blocks=1 00:42:52.540 
00:42:52.540 ' 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:52.540 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:52.541 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:57.796 12:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:57.796 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:57.797 12:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:57.797 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:57.797 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:57.797 Found net 
devices under 0000:af:00.0: cvl_0_0 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:57.797 Found net devices under 0000:af:00.1: cvl_0_1 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:57.797 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:58.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:58.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:42:58.055 00:42:58.055 --- 10.0.0.2 ping statistics --- 00:42:58.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:58.055 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:58.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:58.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:42:58.055 00:42:58.055 --- 10.0.0.1 ping statistics --- 00:42:58.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:58.055 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3973438 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3973438 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3973438 ']' 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:58.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
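The ping exchanges a few lines up confirm the interface wiring that common.sh performed: the target NIC is isolated in its own network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over a real NVMe/TCP path even though both live on one host. The sequence, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP (port 4420) arriving on the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator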
00:42:58.055 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:58.056 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:58.056 [2024-12-10 12:46:04.879545] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:58.313 [2024-12-10 12:46:04.881495] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:42:58.313 [2024-12-10 12:46:04.881578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:58.313 [2024-12-10 12:46:04.996593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:58.313 [2024-12-10 12:46:05.112918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:58.313 [2024-12-10 12:46:05.112959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:58.313 [2024-12-10 12:46:05.112970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:58.313 [2024-12-10 12:46:05.112979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:58.313 [2024-12-10 12:46:05.112988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:58.313 [2024-12-10 12:46:05.115228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:58.313 [2024-12-10 12:46:05.115267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:58.313 [2024-12-10 12:46:05.115353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:58.313 [2024-12-10 12:46:05.115364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:58.876 [2024-12-10 12:46:05.432756] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:58.876 [2024-12-10 12:46:05.434222] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:58.876 [2024-12-10 12:46:05.435881] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:58.876 [2024-12-10 12:46:05.437085] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:58.876 [2024-12-10 12:46:05.437420] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
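The startup notices above come from launching the target inside that namespace with interrupt mode enabled: -m 0xF pins four reactors to cores 0-3, --interrupt-mode lets idle reactors block in epoll rather than busy-poll, and each nvmf poll-group thread is switched to intr mode once it is up. A rough sketch of the launch-and-wait step; the readiness loop is only an approximation of the waitforlisten helper, which waits on /var/tmp/spdk.sock (rpc_get_methods is a standard SPDK RPC, but assuming it here is this sketch's choice, not something the log shows):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # Block until the app answers RPCs on its UNIX-domain socket.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done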
00:42:58.876 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:58.876 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:42:58.876 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:58.877 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:58.877 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.134 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:59.134 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:59.134 [2024-12-10 12:46:05.896438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:59.134 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:59.390 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:42:59.390 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:59.661 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:42:59.661 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:59.919 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:42:59.919 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:00.176 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:00.176 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:00.434 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:00.692 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:00.692 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:00.949 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:00.949 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:01.207 12:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:01.207 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:01.464 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:01.724 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:01.724 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:01.724 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:01.724 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:02.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:02.286 [2024-12-10 12:46:08.924287] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:02.286 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:02.557 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:02.557 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:03.120 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:03.120 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:03.120 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:03.120 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:03.120 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:03.120 12:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:05.017 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:05.017 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:43:05.017 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:05.017 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:05.017 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:05.017 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:43:05.017 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:05.017 [global] 00:43:05.017 thread=1 00:43:05.017 invalidate=1 00:43:05.017 rw=write 00:43:05.017 time_based=1 00:43:05.017 runtime=1 00:43:05.017 ioengine=libaio 00:43:05.017 direct=1 00:43:05.017 bs=4096 00:43:05.017 iodepth=1 00:43:05.017 norandommap=0 00:43:05.017 numjobs=1 00:43:05.017 00:43:05.017 verify_dump=1 00:43:05.017 verify_backlog=512 00:43:05.017 verify_state_save=0 00:43:05.017 do_verify=1 00:43:05.017 verify=crc32c-intel 00:43:05.017 [job0] 00:43:05.017 filename=/dev/nvme0n1 00:43:05.017 [job1] 00:43:05.017 filename=/dev/nvme0n2 00:43:05.017 [job2] 00:43:05.017 filename=/dev/nvme0n3 00:43:05.017 [job3] 00:43:05.017 filename=/dev/nvme0n4 00:43:05.017 Could not set queue depth (nvme0n1) 00:43:05.017 Could not set queue depth (nvme0n2) 00:43:05.017 Could not set queue depth (nvme0n3) 00:43:05.017 Could not set queue depth (nvme0n4) 00:43:05.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.275 fio-3.35 00:43:05.275 Starting 4 threads 00:43:06.645 00:43:06.645 job0: (groupid=0, jobs=1): err= 0: pid=3974756: Tue Dec 10 12:46:13 2024 00:43:06.645 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:43:06.645 slat (nsec): min=9910, max=30104, avg=22443.36, stdev=3371.82 00:43:06.645 clat (usec): min=40483, max=41971, avg=40993.65, stdev=245.95 00:43:06.645 lat (usec): min=40493, max=42001, avg=41016.09, stdev=248.62 00:43:06.645 clat percentiles (usec): 00:43:06.645 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:06.645 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:06.645 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:06.645 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:06.645 | 99.99th=[42206] 00:43:06.645 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:43:06.645 slat (nsec): min=10730, max=41525, avg=14166.03, stdev=2387.86 00:43:06.646 clat (usec): min=169, max=391, avg=236.03, stdev=36.39 00:43:06.646 lat (usec): min=182, max=411, avg=250.20, stdev=37.55 00:43:06.646 clat percentiles (usec): 00:43:06.646 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 200], 00:43:06.646 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 241], 60.00th=[ 251], 00:43:06.646 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:43:06.646 
| 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 392], 99.95th=[ 392], 00:43:06.646 | 99.99th=[ 392] 00:43:06.646 bw ( KiB/s): min= 4096, max= 4096, per=25.84%, avg=4096.00, stdev= 0.00, samples=1 00:43:06.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:06.646 lat (usec) : 250=55.24%, 500=40.64% 00:43:06.646 lat (msec) : 50=4.12% 00:43:06.646 cpu : usr=0.00%, sys=1.55%, ctx=536, majf=0, minf=2 00:43:06.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.646 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.646 job1: (groupid=0, jobs=1): err= 0: pid=3974757: Tue Dec 10 12:46:13 2024 00:43:06.646 read: IOPS=35, BW=143KiB/s (147kB/s)(144KiB/1004msec) 00:43:06.646 slat (nsec): min=6910, max=25762, avg=12907.81, stdev=6907.76 00:43:06.646 clat (usec): min=252, max=42013, avg=24259.51, stdev=20491.00 00:43:06.646 lat (usec): min=259, max=42024, avg=24272.42, stdev=20494.58 00:43:06.646 clat percentiles (usec): 00:43:06.646 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 322], 20.00th=[ 347], 00:43:06.646 | 30.00th=[ 408], 40.00th=[ 445], 50.00th=[41157], 60.00th=[41157], 00:43:06.646 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:43:06.646 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:06.646 | 99.99th=[42206] 00:43:06.646 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:43:06.646 slat (nsec): min=9622, max=37828, avg=11113.01, stdev=1674.96 00:43:06.646 clat (usec): min=158, max=345, avg=239.09, stdev=35.31 00:43:06.646 lat (usec): min=169, max=369, avg=250.20, stdev=35.33 00:43:06.646 clat percentiles (usec): 00:43:06.646 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 204], 00:43:06.646 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 251], 60.00th=[ 260], 00:43:06.646 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 285], 00:43:06.646 | 99.00th=[ 302], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:43:06.646 | 99.99th=[ 347] 00:43:06.646 bw ( KiB/s): min= 4096, max= 4096, per=25.84%, avg=4096.00, stdev= 0.00, samples=1 00:43:06.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:06.646 lat (usec) : 250=46.53%, 500=49.64% 00:43:06.646 lat (msec) : 50=3.83% 00:43:06.646 cpu : usr=0.30%, sys=0.50%, ctx=549, majf=0, minf=1 00:43:06.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.646 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.646 job2: (groupid=0, jobs=1): err= 0: pid=3974758: Tue Dec 10 12:46:13 2024 00:43:06.646 read: IOPS=510, BW=2042KiB/s (2091kB/s)(2116KiB/1036msec) 00:43:06.646 slat (nsec): min=6671, max=27341, avg=8285.31, stdev=2859.18 00:43:06.646 clat (usec): min=221, max=42111, avg=1573.40, stdev=7242.02 00:43:06.646 lat (usec): min=232, max=42132, avg=1581.69, stdev=7244.51 00:43:06.646 clat percentiles (usec): 00:43:06.646 | 1.00th=[ 237], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 245], 00:43:06.646 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 
60.00th=[ 258], 00:43:06.646 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:43:06.646 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:06.646 | 99.99th=[42206] 00:43:06.646 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:43:06.646 slat (nsec): min=7344, max=29674, avg=11182.95, stdev=1893.31 00:43:06.646 clat (usec): min=146, max=292, avg=179.06, stdev=14.35 00:43:06.646 lat (usec): min=157, max=321, avg=190.24, stdev=15.19 00:43:06.646 clat percentiles (usec): 00:43:06.646 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:43:06.646 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:43:06.646 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:43:06.646 | 99.00th=[ 210], 99.50th=[ 225], 99.90th=[ 260], 99.95th=[ 293], 00:43:06.646 | 99.99th=[ 293] 00:43:06.646 bw ( KiB/s): min= 8192, max= 8192, per=51.67%, avg=8192.00, stdev= 0.00, samples=1 00:43:06.646 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:06.646 lat (usec) : 250=78.75%, 500=20.15% 00:43:06.646 lat (msec) : 50=1.09% 00:43:06.646 cpu : usr=0.87%, sys=1.45%, ctx=1553, majf=0, minf=2 00:43:06.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.646 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.646 job3: (groupid=0, jobs=1): err= 0: pid=3974759: Tue Dec 10 12:46:13 2024 00:43:06.646 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:43:06.646 slat (nsec): min=7294, max=38037, avg=8384.88, stdev=1492.45 00:43:06.646 clat (usec): min=225, max=502, avg=265.11, stdev=19.30 00:43:06.646 lat (usec): min=243, max=517, avg=273.50, stdev=19.42 00:43:06.646 clat percentiles (usec): 00:43:06.646 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:43:06.646 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:43:06.646 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 297], 00:43:06.646 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 392], 99.95th=[ 400], 00:43:06.646 | 99.99th=[ 502] 00:43:06.646 write: IOPS=2055, BW=8224KiB/s (8421kB/s)(8232KiB/1001msec); 0 zone resets 00:43:06.646 slat (nsec): min=8627, max=42724, avg=12234.27, stdev=1969.20 00:43:06.646 clat (usec): min=143, max=1235, avg=195.56, stdev=42.56 00:43:06.646 lat (usec): min=169, max=1246, avg=207.80, stdev=42.78 00:43:06.646 clat percentiles (usec): 00:43:06.646 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:43:06.646 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:43:06.646 | 70.00th=[ 200], 80.00th=[ 227], 90.00th=[ 258], 95.00th=[ 273], 00:43:06.646 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 375], 99.95th=[ 441], 00:43:06.646 | 99.99th=[ 1237] 00:43:06.646 bw ( KiB/s): min= 8192, max= 8192, per=51.67%, avg=8192.00, stdev= 0.00, samples=1 00:43:06.646 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:06.646 lat (usec) : 250=54.02%, 500=45.93%, 750=0.02% 00:43:06.646 lat (msec) : 2=0.02% 00:43:06.646 cpu : usr=2.40%, sys=7.50%, ctx=4109, majf=0, minf=1 00:43:06.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:43:06.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.646 issued rwts: total=2048,2058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.646 00:43:06.646 Run status group 0 (all jobs): 00:43:06.646 READ: bw=9.93MiB/s (10.4MB/s), 85.2KiB/s-8184KiB/s (87.2kB/s-8380kB/s), io=10.3MiB (10.8MB), run=1001-1036msec 00:43:06.646 WRITE: bw=15.5MiB/s (16.2MB/s), 1983KiB/s-8224KiB/s (2030kB/s-8421kB/s), io=16.0MiB (16.8MB), run=1001-1036msec 00:43:06.646 00:43:06.646 Disk stats (read/write): 00:43:06.646 nvme0n1: ios=70/512, merge=0/0, ticks=1407/114, in_queue=1521, util=85.97% 00:43:06.646 nvme0n2: ios=59/512, merge=0/0, ticks=1615/125, in_queue=1740, util=90.13% 00:43:06.646 nvme0n3: ios=581/1024, merge=0/0, ticks=697/175, in_queue=872, util=94.69% 00:43:06.646 nvme0n4: ios=1593/2011, merge=0/0, ticks=1209/372, in_queue=1581, util=94.23% 00:43:06.646 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:06.646 [global] 00:43:06.646 thread=1 00:43:06.646 invalidate=1 00:43:06.646 rw=randwrite 00:43:06.646 time_based=1 00:43:06.646 runtime=1 00:43:06.646 ioengine=libaio 00:43:06.646 direct=1 00:43:06.646 bs=4096 00:43:06.646 iodepth=1 00:43:06.646 norandommap=0 00:43:06.646 numjobs=1 00:43:06.646 00:43:06.646 verify_dump=1 00:43:06.646 verify_backlog=512 00:43:06.646 verify_state_save=0 00:43:06.646 do_verify=1 00:43:06.646 verify=crc32c-intel 00:43:06.646 [job0] 00:43:06.646 filename=/dev/nvme0n1 00:43:06.646 [job1] 00:43:06.646 filename=/dev/nvme0n2 00:43:06.646 [job2] 00:43:06.646 filename=/dev/nvme0n3 00:43:06.646 [job3] 00:43:06.646 filename=/dev/nvme0n4 00:43:06.646 Could not set queue depth (nvme0n1) 00:43:06.646 Could not set queue depth (nvme0n2) 00:43:06.646 Could not set queue depth (nvme0n3) 00:43:06.646 Could not set queue depth (nvme0n4) 00:43:06.904 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.904 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.904 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.904 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.904 fio-3.35 00:43:06.904 Starting 4 threads 00:43:08.278 00:43:08.278 job0: (groupid=0, jobs=1): err= 0: pid=3975125: Tue Dec 10 12:46:14 2024 00:43:08.278 read: IOPS=921, BW=3684KiB/s (3773kB/s)(3780KiB/1026msec) 00:43:08.278 slat (nsec): min=6893, max=32410, avg=8039.55, stdev=2349.74 00:43:08.278 clat (usec): min=195, max=41199, avg=867.94, stdev=5099.04 00:43:08.278 lat (usec): min=204, max=41211, avg=875.98, stdev=5100.96 00:43:08.278 clat percentiles (usec): 00:43:08.278 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:43:08.278 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 212], 00:43:08.278 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 243], 95.00th=[ 269], 00:43:08.278 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:08.278 | 99.99th=[41157] 00:43:08.278 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:43:08.278 slat (nsec): min=10072, max=47856, avg=11321.77, stdev=2050.02 00:43:08.278 clat (usec): min=132, max=808, 
avg=176.32, stdev=41.59 00:43:08.278 lat (usec): min=152, max=819, avg=187.64, stdev=41.97 00:43:08.278 clat percentiles (usec): 00:43:08.278 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 147], 20.00th=[ 149], 00:43:08.278 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 172], 60.00th=[ 184], 00:43:08.278 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 241], 00:43:08.278 | 99.00th=[ 314], 99.50th=[ 404], 99.90th=[ 553], 99.95th=[ 807], 00:43:08.278 | 99.99th=[ 807] 00:43:08.278 bw ( KiB/s): min= 8192, max= 8192, per=58.63%, avg=8192.00, stdev= 0.00, samples=1 00:43:08.278 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:08.278 lat (usec) : 250=94.97%, 500=4.06%, 750=0.10%, 1000=0.05% 00:43:08.278 lat (msec) : 10=0.05%, 50=0.76% 00:43:08.278 cpu : usr=1.66%, sys=2.93%, ctx=1970, majf=0, minf=1 00:43:08.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.278 issued rwts: total=945,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.278 job1: (groupid=0, jobs=1): err= 0: pid=3975126: Tue Dec 10 12:46:14 2024 00:43:08.278 read: IOPS=1156, BW=4627KiB/s (4738kB/s)(4632KiB/1001msec) 00:43:08.278 slat (nsec): min=7097, max=44105, avg=8274.06, stdev=2348.67 00:43:08.278 clat (usec): min=207, max=41063, avg=590.73, stdev=3766.29 00:43:08.278 lat (usec): min=215, max=41086, avg=599.01, stdev=3767.62 00:43:08.278 clat percentiles (usec): 00:43:08.278 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:43:08.278 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:43:08.278 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 260], 00:43:08.278 | 99.00th=[ 408], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:08.278 | 99.99th=[41157] 00:43:08.278 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:43:08.278 slat (usec): min=3, max=28888, avg=29.80, stdev=736.83 00:43:08.278 clat (usec): min=133, max=400, avg=164.54, stdev=25.23 00:43:08.278 lat (usec): min=143, max=29078, avg=194.34, stdev=737.92 00:43:08.278 clat percentiles (usec): 00:43:08.278 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:43:08.278 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:43:08.278 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 206], 00:43:08.278 | 99.00th=[ 251], 99.50th=[ 285], 99.90th=[ 343], 99.95th=[ 400], 00:43:08.278 | 99.99th=[ 400] 00:43:08.278 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:43:08.278 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:08.278 lat (usec) : 250=92.24%, 500=7.39% 00:43:08.278 lat (msec) : 50=0.37% 00:43:08.278 cpu : usr=1.80%, sys=4.60%, ctx=2696, majf=0, minf=1 00:43:08.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.278 issued rwts: total=1158,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.278 job2: (groupid=0, jobs=1): err= 0: pid=3975127: Tue Dec 10 12:46:14 2024 00:43:08.278 read: IOPS=21, BW=86.9KiB/s 
(89.0kB/s)(88.0KiB/1013msec) 00:43:08.278 slat (nsec): min=9638, max=24999, avg=22984.86, stdev=3063.30 00:43:08.278 clat (usec): min=40836, max=41460, avg=40991.19, stdev=122.81 00:43:08.278 lat (usec): min=40860, max=41470, avg=41014.18, stdev=120.22 00:43:08.278 clat percentiles (usec): 00:43:08.278 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:08.278 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:08.278 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:08.278 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:08.278 | 99.99th=[41681] 00:43:08.278 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:43:08.278 slat (nsec): min=9555, max=37797, avg=10724.00, stdev=1572.56 00:43:08.278 clat (usec): min=177, max=441, avg=203.15, stdev=17.37 00:43:08.278 lat (usec): min=186, max=478, avg=213.87, stdev=18.16 00:43:08.278 clat percentiles (usec): 00:43:08.278 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 192], 00:43:08.278 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:43:08.278 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 233], 00:43:08.278 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 441], 99.95th=[ 441], 00:43:08.278 | 99.99th=[ 441] 00:43:08.278 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:43:08.278 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:08.278 lat (usec) : 250=95.32%, 500=0.56% 00:43:08.278 lat (msec) : 50=4.12% 00:43:08.278 cpu : usr=0.20%, sys=0.59%, ctx=535, majf=0, minf=1 00:43:08.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.278 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.278 job3: (groupid=0, jobs=1): err= 0: pid=3975129: Tue Dec 10 12:46:14 2024 00:43:08.278 read: IOPS=261, BW=1047KiB/s (1072kB/s)(1048KiB/1001msec) 00:43:08.278 slat (nsec): min=8049, max=31253, avg=11661.69, stdev=5547.84 00:43:08.278 clat (usec): min=227, max=41835, avg=3390.45, stdev=10843.51 00:43:08.278 lat (usec): min=236, max=41859, avg=3402.12, stdev=10846.80 00:43:08.278 clat percentiles (usec): 00:43:08.278 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 265], 20.00th=[ 269], 00:43:08.278 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 277], 60.00th=[ 281], 00:43:08.278 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[41157], 00:43:08.278 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:43:08.278 | 99.99th=[41681] 00:43:08.278 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:43:08.278 slat (nsec): min=8263, max=95322, avg=11291.40, stdev=4255.05 00:43:08.278 clat (usec): min=175, max=494, avg=196.41, stdev=23.00 00:43:08.279 lat (usec): min=185, max=528, avg=207.70, stdev=25.10 00:43:08.279 clat percentiles (usec): 00:43:08.279 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:43:08.279 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:43:08.279 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 219], 00:43:08.279 | 99.00th=[ 306], 99.50th=[ 347], 99.90th=[ 494], 99.95th=[ 494], 00:43:08.279 | 99.99th=[ 494] 00:43:08.279 bw ( KiB/s): min= 4096, max= 4096, 
per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:43:08.279 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:08.279 lat (usec) : 250=65.37%, 500=32.04% 00:43:08.279 lat (msec) : 50=2.58% 00:43:08.279 cpu : usr=0.30%, sys=0.90%, ctx=776, majf=0, minf=1 00:43:08.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.279 issued rwts: total=262,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.279 00:43:08.279 Run status group 0 (all jobs): 00:43:08.279 READ: bw=9306KiB/s (9529kB/s), 86.9KiB/s-4627KiB/s (89.0kB/s-4738kB/s), io=9548KiB (9777kB), run=1001-1026msec 00:43:08.279 WRITE: bw=13.6MiB/s (14.3MB/s), 2022KiB/s-6138KiB/s (2070kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1026msec 00:43:08.279 00:43:08.279 Disk stats (read/write): 00:43:08.279 nvme0n1: ios=978/1024, merge=0/0, ticks=1522/171, in_queue=1693, util=96.69% 00:43:08.279 nvme0n2: ios=980/1024, merge=0/0, ticks=914/160, in_queue=1074, util=98.27% 00:43:08.279 nvme0n3: ios=65/512, merge=0/0, ticks=1495/100, in_queue=1595, util=97.09% 00:43:08.279 nvme0n4: ios=50/512, merge=0/0, ticks=1332/93, in_queue=1425, util=99.37% 00:43:08.279 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:08.279 [global] 00:43:08.279 thread=1 00:43:08.279 invalidate=1 00:43:08.279 rw=write 00:43:08.279 time_based=1 00:43:08.279 runtime=1 00:43:08.279 ioengine=libaio 00:43:08.279 direct=1 00:43:08.279 bs=4096 00:43:08.279 iodepth=128 00:43:08.279 norandommap=0 00:43:08.279 numjobs=1 00:43:08.279 00:43:08.279 verify_dump=1 00:43:08.279 verify_backlog=512 00:43:08.279 verify_state_save=0 00:43:08.279 do_verify=1 00:43:08.279 verify=crc32c-intel 00:43:08.279 [job0] 00:43:08.279 filename=/dev/nvme0n1 00:43:08.279 [job1] 00:43:08.279 filename=/dev/nvme0n2 00:43:08.279 [job2] 00:43:08.279 filename=/dev/nvme0n3 00:43:08.279 [job3] 00:43:08.279 filename=/dev/nvme0n4 00:43:08.279 Could not set queue depth (nvme0n1) 00:43:08.279 Could not set queue depth (nvme0n2) 00:43:08.279 Could not set queue depth (nvme0n3) 00:43:08.279 Could not set queue depth (nvme0n4) 00:43:08.535 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.535 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.535 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.535 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.535 fio-3.35 00:43:08.535 Starting 4 threads 00:43:09.929 00:43:09.929 job0: (groupid=0, jobs=1): err= 0: pid=3975494: Tue Dec 10 12:46:16 2024 00:43:09.929 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:43:09.929 slat (nsec): min=1547, max=42956k, avg=128108.55, stdev=1018092.69 00:43:09.929 clat (usec): min=7539, max=59814, avg=16623.26, stdev=10508.84 00:43:09.929 lat (usec): min=8913, max=59821, avg=16751.37, stdev=10543.30 00:43:09.929 clat percentiles (usec): 00:43:09.929 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11469], 00:43:09.929 | 30.00th=[12125], 
40.00th=[12256], 50.00th=[12518], 60.00th=[13304], 00:43:09.929 | 70.00th=[16188], 80.00th=[17957], 90.00th=[21890], 95.00th=[50594], 00:43:09.929 | 99.00th=[52691], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:43:09.929 | 99.99th=[60031] 00:43:09.929 write: IOPS=3709, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1005msec); 0 zone resets 00:43:09.929 slat (usec): min=2, max=13434, avg=139.47, stdev=746.88 00:43:09.929 clat (usec): min=4313, max=98352, avg=18109.86, stdev=16145.48 00:43:09.929 lat (usec): min=6104, max=98368, avg=18249.34, stdev=16225.94 00:43:09.929 clat percentiles (usec): 00:43:09.929 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[11338], 00:43:09.929 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:43:09.929 | 70.00th=[12780], 80.00th=[21890], 90.00th=[29754], 95.00th=[59507], 00:43:09.929 | 99.00th=[95945], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042], 00:43:09.929 | 99.99th=[98042] 00:43:09.929 bw ( KiB/s): min= 8328, max=20480, per=21.29%, avg=14404.00, stdev=8592.76, samples=2 00:43:09.929 iops : min= 2082, max= 5120, avg=3601.00, stdev=2148.19, samples=2 00:43:09.929 lat (msec) : 10=5.37%, 20=77.31%, 50=11.08%, 100=6.24% 00:43:09.929 cpu : usr=2.59%, sys=4.08%, ctx=383, majf=0, minf=1 00:43:09.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:43:09.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:09.930 issued rwts: total=3584,3728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:09.930 job1: (groupid=0, jobs=1): err= 0: pid=3975495: Tue Dec 10 12:46:16 2024 00:43:09.930 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:43:09.930 slat (nsec): min=1385, max=30390k, avg=113738.62, stdev=763704.88 00:43:09.930 clat (usec): min=8064, max=89157, avg=14504.86, stdev=11404.48 00:43:09.930 lat (usec): min=8346, max=89167, avg=14618.59, stdev=11463.03 00:43:09.930 clat percentiles (usec): 00:43:09.930 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10552], 00:43:09.930 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:43:09.930 | 70.00th=[12125], 80.00th=[12649], 90.00th=[19792], 95.00th=[29492], 00:43:09.930 | 99.00th=[79168], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:43:09.930 | 99.99th=[89654] 00:43:09.930 write: IOPS=5037, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1002msec); 0 zone resets 00:43:09.930 slat (usec): min=2, max=4319, avg=89.52, stdev=442.59 00:43:09.930 clat (usec): min=402, max=20192, avg=11766.91, stdev=2357.64 00:43:09.930 lat (usec): min=2824, max=20221, avg=11856.43, stdev=2334.68 00:43:09.930 clat percentiles (usec): 00:43:09.930 | 1.00th=[ 6652], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10683], 00:43:09.930 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:43:09.930 | 70.00th=[11600], 80.00th=[11994], 90.00th=[16712], 95.00th=[17171], 00:43:09.930 | 99.00th=[17957], 99.50th=[19268], 99.90th=[20055], 99.95th=[20317], 00:43:09.930 | 99.99th=[20317] 00:43:09.930 bw ( KiB/s): min=16384, max=22976, per=29.09%, avg=19680.00, stdev=4661.25, samples=2 00:43:09.930 iops : min= 4096, max= 5744, avg=4920.00, stdev=1165.31, samples=2 00:43:09.930 lat (usec) : 500=0.01% 00:43:09.930 lat (msec) : 4=0.39%, 10=8.48%, 20=86.42%, 50=3.04%, 100=1.65% 00:43:09.930 cpu : usr=3.70%, sys=4.70%, ctx=491, majf=0, minf=1 00:43:09.930 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:43:09.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:09.930 issued rwts: total=4608,5048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:09.930 job2: (groupid=0, jobs=1): err= 0: pid=3975496: Tue Dec 10 12:46:16 2024 00:43:09.930 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:43:09.930 slat (nsec): min=1508, max=12835k, avg=133394.29, stdev=921710.19 00:43:09.930 clat (usec): min=4968, max=38341, avg=16817.33, stdev=5830.98 00:43:09.930 lat (usec): min=4979, max=38367, avg=16950.72, stdev=5902.01 00:43:09.930 clat percentiles (usec): 00:43:09.930 | 1.00th=[ 8225], 5.00th=[ 8586], 10.00th=[13042], 20.00th=[13173], 00:43:09.930 | 30.00th=[13435], 40.00th=[14222], 50.00th=[15139], 60.00th=[15926], 00:43:09.930 | 70.00th=[16581], 80.00th=[20579], 90.00th=[26084], 95.00th=[31327], 00:43:09.930 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:43:09.930 | 99.99th=[38536] 00:43:09.930 write: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1012msec); 0 zone resets 00:43:09.930 slat (usec): min=2, max=45978, avg=159.97, stdev=1134.20 00:43:09.930 clat (usec): min=2806, max=64967, avg=19927.20, stdev=9704.87 00:43:09.930 lat (usec): min=2817, max=83411, avg=20087.18, stdev=9827.68 00:43:09.930 clat percentiles (usec): 00:43:09.930 | 1.00th=[ 4113], 5.00th=[ 6718], 10.00th=[ 9896], 20.00th=[11469], 00:43:09.930 | 30.00th=[13304], 40.00th=[15664], 50.00th=[16581], 60.00th=[21627], 00:43:09.930 | 70.00th=[24773], 80.00th=[29754], 90.00th=[33817], 95.00th=[35914], 00:43:09.930 | 99.00th=[42206], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:43:09.930 | 99.99th=[64750] 00:43:09.930 bw ( KiB/s): min=12288, max=13816, per=19.29%, avg=13052.00, stdev=1080.46, samples=2 00:43:09.930 iops : min= 3072, max= 3454, avg=3263.00, stdev=270.11, samples=2 00:43:09.930 lat (msec) : 4=0.31%, 10=7.97%, 20=58.85%, 50=32.54%, 100=0.32% 00:43:09.930 cpu : usr=3.07%, sys=4.06%, ctx=308, majf=0, minf=1 00:43:09.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:43:09.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:09.930 issued rwts: total=3072,3390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:09.930 job3: (groupid=0, jobs=1): err= 0: pid=3975497: Tue Dec 10 12:46:16 2024 00:43:09.930 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:43:09.930 slat (nsec): min=1076, max=10532k, avg=102023.93, stdev=743405.47 00:43:09.930 clat (usec): min=4282, max=33502, avg=13065.92, stdev=3852.27 00:43:09.930 lat (usec): min=4289, max=33522, avg=13167.94, stdev=3905.06 00:43:09.930 clat percentiles (usec): 00:43:09.930 | 1.00th=[ 5800], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10814], 00:43:09.930 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518], 00:43:09.930 | 70.00th=[13698], 80.00th=[15795], 90.00th=[18744], 95.00th=[20841], 00:43:09.930 | 99.00th=[25822], 99.50th=[26084], 99.90th=[29754], 99.95th=[33424], 00:43:09.930 | 99.99th=[33424] 00:43:09.930 write: IOPS=4900, BW=19.1MiB/s (20.1MB/s)(19.3MiB/1010msec); 0 zone resets 00:43:09.930 slat (usec): min=2, max=9637, avg=96.49, stdev=586.63 00:43:09.930 clat 
(usec): min=527, max=44246, avg=13737.79, stdev=9265.86 00:43:09.930 lat (usec): min=535, max=44252, avg=13834.28, stdev=9330.07 00:43:09.930 clat percentiles (usec): 00:43:09.930 | 1.00th=[ 1991], 5.00th=[ 5473], 10.00th=[ 6980], 20.00th=[ 8717], 00:43:09.930 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11469], 60.00th=[12125], 00:43:09.930 | 70.00th=[12387], 80.00th=[14746], 90.00th=[26608], 95.00th=[42206], 00:43:09.930 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:43:09.930 | 99.99th=[44303] 00:43:09.930 bw ( KiB/s): min=18488, max=20096, per=28.52%, avg=19292.00, stdev=1137.03, samples=2 00:43:09.930 iops : min= 4622, max= 5024, avg=4823.00, stdev=284.26, samples=2 00:43:09.930 lat (usec) : 750=0.06%, 1000=0.02% 00:43:09.930 lat (msec) : 2=0.46%, 4=0.89%, 10=22.50%, 20=66.64%, 50=9.43% 00:43:09.930 cpu : usr=2.97%, sys=5.25%, ctx=420, majf=0, minf=2 00:43:09.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:43:09.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:09.930 issued rwts: total=4608,4950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:09.930 00:43:09.930 Run status group 0 (all jobs): 00:43:09.930 READ: bw=61.3MiB/s (64.2MB/s), 11.9MiB/s-18.0MiB/s (12.4MB/s-18.8MB/s), io=62.0MiB (65.0MB), run=1002-1012msec 00:43:09.930 WRITE: bw=66.1MiB/s (69.3MB/s), 13.1MiB/s-19.7MiB/s (13.7MB/s-20.6MB/s), io=66.9MiB (70.1MB), run=1002-1012msec 00:43:09.930 00:43:09.930 Disk stats (read/write): 00:43:09.930 nvme0n1: ios=3122/3551, merge=0/0, ticks=11089/15539, in_queue=26628, util=87.27% 00:43:09.930 nvme0n2: ios=3952/4096, merge=0/0, ticks=15252/11736, in_queue=26988, util=97.16% 00:43:09.930 nvme0n3: ios=2581/2863, merge=0/0, ticks=29257/46506, in_queue=75763, util=95.64% 00:43:09.930 nvme0n4: ios=3834/4096, merge=0/0, ticks=34393/40166, in_queue=74559, util=95.40% 00:43:09.930 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:09.930 [global] 00:43:09.930 thread=1 00:43:09.930 invalidate=1 00:43:09.930 rw=randwrite 00:43:09.930 time_based=1 00:43:09.930 runtime=1 00:43:09.930 ioengine=libaio 00:43:09.930 direct=1 00:43:09.930 bs=4096 00:43:09.930 iodepth=128 00:43:09.930 norandommap=0 00:43:09.930 numjobs=1 00:43:09.930 00:43:09.930 verify_dump=1 00:43:09.930 verify_backlog=512 00:43:09.930 verify_state_save=0 00:43:09.930 do_verify=1 00:43:09.930 verify=crc32c-intel 00:43:09.930 [job0] 00:43:09.930 filename=/dev/nvme0n1 00:43:09.930 [job1] 00:43:09.930 filename=/dev/nvme0n2 00:43:09.930 [job2] 00:43:09.930 filename=/dev/nvme0n3 00:43:09.930 [job3] 00:43:09.930 filename=/dev/nvme0n4 00:43:09.930 Could not set queue depth (nvme0n1) 00:43:09.930 Could not set queue depth (nvme0n2) 00:43:09.930 Could not set queue depth (nvme0n3) 00:43:09.930 Could not set queue depth (nvme0n4) 00:43:10.189 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:10.189 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:10.189 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:10.189 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:10.189 fio-3.35 00:43:10.189 Starting 4 threads 00:43:11.559 00:43:11.559 job0: (groupid=0, jobs=1): err= 0: pid=3975855: Tue Dec 10 12:46:18 2024 00:43:11.559 read: IOPS=4262, BW=16.6MiB/s (17.5MB/s)(17.4MiB/1048msec) 00:43:11.559 slat (nsec): min=1190, max=10208k, avg=90109.35, stdev=666042.08 00:43:11.559 clat (usec): min=2283, max=55645, avg=12694.50, stdev=7451.66 00:43:11.559 lat (usec): min=2305, max=60876, avg=12784.61, stdev=7485.40 00:43:11.559 clat percentiles (usec): 00:43:11.559 | 1.00th=[ 6063], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9634], 00:43:11.559 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[11469], 00:43:11.559 | 70.00th=[12387], 80.00th=[13304], 90.00th=[17171], 95.00th=[22152], 00:43:11.559 | 99.00th=[51643], 99.50th=[51643], 99.90th=[55837], 99.95th=[55837], 00:43:11.559 | 99.99th=[55837] 00:43:11.559 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:43:11.559 slat (usec): min=2, max=10541, avg=118.87, stdev=763.91 00:43:11.559 clat (usec): min=1383, max=72461, avg=16477.41, stdev=13204.43 00:43:11.559 lat (usec): min=1398, max=72469, avg=16596.28, stdev=13294.85 00:43:11.559 clat percentiles (usec): 00:43:11.559 | 1.00th=[ 3982], 5.00th=[ 7177], 10.00th=[ 8356], 20.00th=[ 8848], 00:43:11.559 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11207], 60.00th=[11863], 00:43:11.559 | 70.00th=[14877], 80.00th=[19006], 90.00th=[38536], 95.00th=[52167], 00:43:11.559 | 99.00th=[63177], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:43:11.559 | 99.99th=[72877] 00:43:11.559 bw ( KiB/s): min=14544, max=22320, per=30.00%, avg=18432.00, stdev=5498.46, samples=2 00:43:11.559 iops : min= 3636, max= 5580, avg=4608.00, stdev=1374.62, samples=2 00:43:11.559 lat (msec) : 2=0.11%, 4=0.77%, 10=33.86%, 20=53.09%, 50=7.74% 00:43:11.559 lat (msec) : 100=4.43% 00:43:11.559 cpu : usr=4.20%, sys=4.58%, ctx=333, majf=0, minf=1 00:43:11.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:11.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:11.559 issued rwts: total=4467,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:11.559 job1: (groupid=0, jobs=1): err= 0: pid=3975856: Tue Dec 10 12:46:18 2024 00:43:11.559 read: IOPS=3876, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1003msec) 00:43:11.559 slat (nsec): min=1589, max=11347k, avg=113362.89, stdev=773485.22 00:43:11.559 clat (usec): min=1449, max=45296, avg=13069.06, stdev=5432.31 00:43:11.559 lat (usec): min=4315, max=45301, avg=13182.43, stdev=5502.82 00:43:11.559 clat percentiles (usec): 00:43:11.559 | 1.00th=[ 5145], 5.00th=[ 7439], 10.00th=[ 8455], 20.00th=[ 9896], 00:43:11.559 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[12518], 00:43:11.559 | 70.00th=[13960], 80.00th=[15664], 90.00th=[19006], 95.00th=[21890], 00:43:11.559 | 99.00th=[37487], 99.50th=[40633], 99.90th=[45351], 99.95th=[45351], 00:43:11.559 | 99.99th=[45351] 00:43:11.559 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:43:11.559 slat (usec): min=2, max=13661, avg=126.15, stdev=718.57 00:43:11.559 clat (usec): min=1151, max=51362, avg=18588.88, stdev=11792.31 00:43:11.559 lat (usec): min=1162, max=51374, avg=18715.03, stdev=11865.38 00:43:11.559 clat percentiles (usec): 00:43:11.559 | 1.00th=[ 
2638], 5.00th=[ 5538], 10.00th=[ 7439], 20.00th=[ 9503], 00:43:11.559 | 30.00th=[10814], 40.00th=[12125], 50.00th=[13829], 60.00th=[16581], 00:43:11.559 | 70.00th=[22938], 80.00th=[31589], 90.00th=[35390], 95.00th=[41157], 00:43:11.559 | 99.00th=[49546], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:43:11.559 | 99.99th=[51119] 00:43:11.559 bw ( KiB/s): min=13944, max=18824, per=26.67%, avg=16384.00, stdev=3450.68, samples=2 00:43:11.559 iops : min= 3486, max= 4706, avg=4096.00, stdev=862.67, samples=2 00:43:11.559 lat (msec) : 2=0.15%, 4=0.95%, 10=21.04%, 20=57.79%, 50=19.69% 00:43:11.559 lat (msec) : 100=0.38% 00:43:11.559 cpu : usr=4.39%, sys=4.79%, ctx=351, majf=0, minf=2 00:43:11.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:11.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:11.560 issued rwts: total=3888,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:11.560 job2: (groupid=0, jobs=1): err= 0: pid=3975857: Tue Dec 10 12:46:18 2024 00:43:11.560 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:43:11.560 slat (nsec): min=1280, max=20553k, avg=117028.98, stdev=910017.01 00:43:11.560 clat (usec): min=2041, max=70253, avg=17037.07, stdev=11380.51 00:43:11.560 lat (usec): min=2049, max=70277, avg=17154.10, stdev=11476.31 00:43:11.560 clat percentiles (usec): 00:43:11.560 | 1.00th=[ 2180], 5.00th=[ 6783], 10.00th=[ 7898], 20.00th=[11338], 00:43:11.560 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13304], 60.00th=[14353], 00:43:11.560 | 70.00th=[16319], 80.00th=[18482], 90.00th=[32113], 95.00th=[48497], 00:43:11.560 | 99.00th=[60556], 99.50th=[61080], 99.90th=[63701], 99.95th=[66847], 00:43:11.560 | 99.99th=[69731] 00:43:11.560 write: IOPS=3813, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec); 0 zone resets 00:43:11.560 slat (usec): min=2, max=18561, avg=117.68, stdev=934.27 00:43:11.560 clat (usec): min=940, max=63615, avg=17305.02, stdev=11483.68 00:43:11.560 lat (usec): min=949, max=63628, avg=17422.70, stdev=11585.27 00:43:11.560 clat percentiles (usec): 00:43:11.560 | 1.00th=[ 2638], 5.00th=[ 6128], 10.00th=[ 7898], 20.00th=[10552], 00:43:11.560 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13042], 60.00th=[14222], 00:43:11.560 | 70.00th=[15401], 80.00th=[20579], 90.00th=[38011], 95.00th=[44303], 00:43:11.560 | 99.00th=[49021], 99.50th=[53216], 99.90th=[57934], 99.95th=[59507], 00:43:11.560 | 99.99th=[63701] 00:43:11.560 bw ( KiB/s): min=11944, max=17728, per=24.15%, avg=14836.00, stdev=4089.91, samples=2 00:43:11.560 iops : min= 2986, max= 4432, avg=3709.00, stdev=1022.48, samples=2 00:43:11.560 lat (usec) : 1000=0.04% 00:43:11.560 lat (msec) : 2=0.24%, 4=2.29%, 10=12.90%, 20=65.07%, 50=17.21% 00:43:11.560 lat (msec) : 100=2.25% 00:43:11.560 cpu : usr=2.89%, sys=4.18%, ctx=295, majf=0, minf=1 00:43:11.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:11.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:11.560 issued rwts: total=3584,3836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:11.560 job3: (groupid=0, jobs=1): err= 0: pid=3975858: Tue Dec 10 12:46:18 2024 00:43:11.560 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 
00:43:11.560 slat (nsec): min=1713, max=11641k, avg=110289.89, stdev=786813.14 00:43:11.560 clat (usec): min=3622, max=32921, avg=13503.82, stdev=4227.14 00:43:11.560 lat (usec): min=3631, max=32926, avg=13614.11, stdev=4280.48 00:43:11.560 clat percentiles (usec): 00:43:11.560 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[10814], 00:43:11.560 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12125], 60.00th=[12911], 00:43:11.560 | 70.00th=[14222], 80.00th=[16057], 90.00th=[19268], 95.00th=[22414], 00:43:11.560 | 99.00th=[28181], 99.50th=[30278], 99.90th=[32900], 99.95th=[32900], 00:43:11.560 | 99.99th=[32900] 00:43:11.560 write: IOPS=3515, BW=13.7MiB/s (14.4MB/s)(13.9MiB/1012msec); 0 zone resets 00:43:11.560 slat (usec): min=2, max=11614, avg=178.88, stdev=969.45 00:43:11.560 clat (usec): min=1750, max=109495, avg=24349.80, stdev=21818.29 00:43:11.560 lat (usec): min=1759, max=109509, avg=24528.67, stdev=21947.59 00:43:11.560 clat percentiles (msec): 00:43:11.560 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:43:11.560 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 19], 00:43:11.560 | 70.00th=[ 23], 80.00th=[ 37], 90.00th=[ 48], 95.00th=[ 85], 00:43:11.560 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 110], 99.95th=[ 110], 00:43:11.560 | 99.99th=[ 110] 00:43:11.560 bw ( KiB/s): min=11056, max=16384, per=22.33%, avg=13720.00, stdev=3767.46, samples=2 00:43:11.560 iops : min= 2764, max= 4096, avg=3430.00, stdev=941.87, samples=2 00:43:11.560 lat (msec) : 2=0.03%, 4=0.32%, 10=13.08%, 20=64.62%, 50=17.06% 00:43:11.560 lat (msec) : 100=4.21%, 250=0.69% 00:43:11.560 cpu : usr=2.67%, sys=4.25%, ctx=320, majf=0, minf=1 00:43:11.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:43:11.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:11.560 issued rwts: total=3072,3558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:11.560 00:43:11.560 Run status group 0 (all jobs): 00:43:11.560 READ: bw=56.0MiB/s (58.7MB/s), 11.9MiB/s-16.6MiB/s (12.4MB/s-17.5MB/s), io=58.6MiB (61.5MB), run=1003-1048msec 00:43:11.560 WRITE: bw=60.0MiB/s (62.9MB/s), 13.7MiB/s-17.2MiB/s (14.4MB/s-18.0MB/s), io=62.9MiB (65.9MB), run=1003-1048msec 00:43:11.560 00:43:11.560 Disk stats (read/write): 00:43:11.560 nvme0n1: ios=3863/4096, merge=0/0, ticks=43362/61750, in_queue=105112, util=99.20% 00:43:11.560 nvme0n2: ios=3360/3584, merge=0/0, ticks=43575/62280, in_queue=105855, util=96.75% 00:43:11.560 nvme0n3: ios=3093/3313, merge=0/0, ticks=31130/30204, in_queue=61334, util=97.92% 00:43:11.560 nvme0n4: ios=2594/2927, merge=0/0, ticks=34261/66124, in_queue=100385, util=96.96% 00:43:11.560 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:11.560 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3976084 00:43:11.560 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:11.560 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:11.560 [global] 00:43:11.560 thread=1 00:43:11.560 invalidate=1 00:43:11.560 rw=read 00:43:11.560 time_based=1 00:43:11.560 runtime=10 00:43:11.560 ioengine=libaio 00:43:11.560 
direct=1 00:43:11.560 bs=4096 00:43:11.560 iodepth=1 00:43:11.560 norandommap=1 00:43:11.560 numjobs=1 00:43:11.560 00:43:11.560 [job0] 00:43:11.560 filename=/dev/nvme0n1 00:43:11.560 [job1] 00:43:11.560 filename=/dev/nvme0n2 00:43:11.560 [job2] 00:43:11.560 filename=/dev/nvme0n3 00:43:11.560 [job3] 00:43:11.560 filename=/dev/nvme0n4 00:43:11.560 Could not set queue depth (nvme0n1) 00:43:11.560 Could not set queue depth (nvme0n2) 00:43:11.560 Could not set queue depth (nvme0n3) 00:43:11.560 Could not set queue depth (nvme0n4) 00:43:11.560 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.560 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.560 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.560 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.560 fio-3.35 00:43:11.560 Starting 4 threads 00:43:14.834 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:14.834 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37642240, buflen=4096 00:43:14.834 fio: pid=3976229, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:14.834 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:14.834 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38793216, buflen=4096 00:43:14.834 fio: pid=3976228, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:14.834 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:14.834 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:15.091 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=18587648, buflen=4096 00:43:15.091 fio: pid=3976226, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:15.091 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:15.091 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:15.348 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50774016, buflen=4096 00:43:15.348 fio: pid=3976227, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:15.348 00:43:15.348 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3976226: Tue Dec 10 12:46:22 2024 00:43:15.348 read: IOPS=1427, BW=5708KiB/s (5845kB/s)(17.7MiB/3180msec) 00:43:15.348 slat (usec): min=6, max=19760, avg=11.99, stdev=293.20 00:43:15.348 clat (usec): min=233, max=42033, avg=682.02, stdev=3811.17 00:43:15.348 lat (usec): min=240, max=61118, avg=694.01, stdev=3869.91 00:43:15.348 clat percentiles (usec): 00:43:15.348 | 1.00th=[ 253], 
5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 281], 00:43:15.348 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 330], 00:43:15.348 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 420], 00:43:15.348 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:43:15.348 | 99.99th=[42206] 00:43:15.348 bw ( KiB/s): min= 96, max=12256, per=14.56%, avg=6044.50, stdev=6513.95, samples=6 00:43:15.348 iops : min= 24, max= 3064, avg=1511.00, stdev=1628.62, samples=6 00:43:15.348 lat (usec) : 250=0.40%, 500=98.06%, 750=0.64% 00:43:15.348 lat (msec) : 50=0.88% 00:43:15.348 cpu : usr=0.38%, sys=1.35%, ctx=4542, majf=0, minf=1 00:43:15.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 issued rwts: total=4539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:15.348 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3976227: Tue Dec 10 12:46:22 2024 00:43:15.348 read: IOPS=3613, BW=14.1MiB/s (14.8MB/s)(48.4MiB/3431msec) 00:43:15.348 slat (usec): min=6, max=31949, avg=17.04, stdev=382.27 00:43:15.348 clat (usec): min=184, max=3393, avg=256.86, stdev=42.44 00:43:15.348 lat (usec): min=212, max=32410, avg=273.90, stdev=387.93 00:43:15.348 clat percentiles (usec): 00:43:15.348 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 241], 00:43:15.348 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:43:15.348 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:43:15.348 | 99.00th=[ 359], 99.50th=[ 404], 99.90th=[ 486], 99.95th=[ 619], 00:43:15.348 | 99.99th=[ 1516] 00:43:15.348 bw ( KiB/s): min=13904, max=15216, per=35.06%, avg=14551.50, stdev=499.47, samples=6 00:43:15.348 iops : min= 3476, max= 3804, avg=3637.83, stdev=124.90, samples=6 00:43:15.348 lat (usec) : 250=44.61%, 500=55.29%, 750=0.07% 00:43:15.348 lat (msec) : 2=0.02%, 4=0.01% 00:43:15.348 cpu : usr=1.84%, sys=6.30%, ctx=12405, majf=0, minf=2 00:43:15.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 issued rwts: total=12397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:15.348 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3976228: Tue Dec 10 12:46:22 2024 00:43:15.348 read: IOPS=3214, BW=12.6MiB/s (13.2MB/s)(37.0MiB/2947msec) 00:43:15.348 slat (nsec): min=6297, max=36523, avg=7596.27, stdev=1579.26 00:43:15.348 clat (usec): min=220, max=809, avg=300.09, stdev=46.01 00:43:15.348 lat (usec): min=228, max=816, avg=307.68, stdev=46.15 00:43:15.348 clat percentiles (usec): 00:43:15.348 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:43:15.348 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:43:15.348 | 70.00th=[ 306], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 388], 00:43:15.348 | 99.00th=[ 474], 99.50th=[ 506], 99.90th=[ 529], 99.95th=[ 578], 00:43:15.348 | 99.99th=[ 807] 00:43:15.348 bw ( KiB/s): min=11856, max=14008, per=30.65%, avg=12720.00, stdev=1111.76, samples=5 00:43:15.348 iops 
: min= 2964, max= 3502, avg=3180.00, stdev=277.94, samples=5 00:43:15.348 lat (usec) : 250=1.69%, 500=97.58%, 750=0.70%, 1000=0.02% 00:43:15.348 cpu : usr=0.78%, sys=3.05%, ctx=9472, majf=0, minf=2 00:43:15.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 issued rwts: total=9472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:15.348 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3976229: Tue Dec 10 12:46:22 2024 00:43:15.348 read: IOPS=3354, BW=13.1MiB/s (13.7MB/s)(35.9MiB/2740msec) 00:43:15.348 slat (nsec): min=6529, max=73980, avg=8019.21, stdev=1537.55 00:43:15.348 clat (usec): min=207, max=1988, avg=286.05, stdev=43.80 00:43:15.348 lat (usec): min=228, max=1996, avg=294.07, stdev=43.95 00:43:15.348 clat percentiles (usec): 00:43:15.348 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 269], 00:43:15.348 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:43:15.348 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 322], 00:43:15.348 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 783], 99.95th=[ 1516], 00:43:15.348 | 99.99th=[ 1991] 00:43:15.348 bw ( KiB/s): min=12680, max=13896, per=32.42%, avg=13452.80, stdev=482.78, samples=5 00:43:15.348 iops : min= 3170, max= 3474, avg=3363.20, stdev=120.69, samples=5 00:43:15.348 lat (usec) : 250=3.08%, 500=96.73%, 750=0.08%, 1000=0.05% 00:43:15.348 lat (msec) : 2=0.05% 00:43:15.348 cpu : usr=1.97%, sys=4.45%, ctx=9192, majf=0, minf=2 00:43:15.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.348 issued rwts: total=9191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:15.348 00:43:15.348 Run status group 0 (all jobs): 00:43:15.348 READ: bw=40.5MiB/s (42.5MB/s), 5708KiB/s-14.1MiB/s (5845kB/s-14.8MB/s), io=139MiB (146MB), run=2740-3431msec 00:43:15.348 00:43:15.348 Disk stats (read/write): 00:43:15.348 nvme0n1: ios=4575/0, merge=0/0, ticks=3697/0, in_queue=3697, util=98.64% 00:43:15.348 nvme0n2: ios=12107/0, merge=0/0, ticks=2959/0, in_queue=2959, util=93.79% 00:43:15.348 nvme0n3: ios=9218/0, merge=0/0, ticks=2691/0, in_queue=2691, util=96.52% 00:43:15.348 nvme0n4: ios=8773/0, merge=0/0, ticks=2418/0, in_queue=2418, util=96.45% 00:43:15.348 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:15.348 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:15.605 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:15.605 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:15.862 12:46:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:15.862 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:16.119 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:16.119 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:16.375 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:16.375 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:16.632 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:16.632 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3976084 00:43:16.632 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:16.632 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:17.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:17.561 nvmf hotplug test: fio failed as expected 00:43:17.561 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:17.818 12:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:17.818 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:17.818 rmmod nvme_tcp 00:43:17.818 rmmod nvme_fabrics 00:43:17.818 rmmod nvme_keyring 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3973438 ']' 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3973438 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3973438 ']' 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3973438 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3973438 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3973438' 00:43:18.075 killing process with pid 3973438 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3973438 00:43:18.075 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3973438 00:43:19.007 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:19.007 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:19.007 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:19.007 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:19.007 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:19.007 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:19.007 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:19.264 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:19.264 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:19.264 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:19.264 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:19.264 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:21.163 00:43:21.163 real 0m29.010s 00:43:21.163 user 1m37.848s 00:43:21.163 sys 0m11.778s 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:21.163 ************************************ 00:43:21.163 END TEST nvmf_fio_target 00:43:21.163 ************************************ 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:21.163 ************************************ 00:43:21.163 START TEST nvmf_bdevio 00:43:21.163 ************************************ 00:43:21.163 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:21.422 * Looking for test storage... 
00:43:21.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:21.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.422 --rc genhtml_branch_coverage=1 00:43:21.422 --rc genhtml_function_coverage=1 00:43:21.422 --rc genhtml_legend=1 00:43:21.422 --rc geninfo_all_blocks=1 00:43:21.422 --rc geninfo_unexecuted_blocks=1 00:43:21.422 00:43:21.422 ' 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:21.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.422 --rc genhtml_branch_coverage=1 00:43:21.422 --rc genhtml_function_coverage=1 00:43:21.422 --rc genhtml_legend=1 00:43:21.422 --rc geninfo_all_blocks=1 00:43:21.422 --rc geninfo_unexecuted_blocks=1 00:43:21.422 00:43:21.422 ' 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:21.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.422 --rc genhtml_branch_coverage=1 00:43:21.422 --rc genhtml_function_coverage=1 00:43:21.422 --rc genhtml_legend=1 00:43:21.422 --rc geninfo_all_blocks=1 00:43:21.422 --rc geninfo_unexecuted_blocks=1 00:43:21.422 00:43:21.422 ' 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:21.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.422 --rc genhtml_branch_coverage=1 00:43:21.422 --rc genhtml_function_coverage=1 00:43:21.422 --rc genhtml_legend=1 00:43:21.422 --rc geninfo_all_blocks=1 00:43:21.422 --rc geninfo_unexecuted_blocks=1 00:43:21.422 00:43:21.422 ' 00:43:21.422 12:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:21.422 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:21.423 12:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:21.423 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:26.682 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:26.682 12:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:26.682 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:26.682 Found net devices under 0000:af:00.0: cvl_0_0 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:26.682 Found net devices under 0000:af:00.1: cvl_0_1 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:26.682 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:26.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:26.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:43:26.683 00:43:26.683 --- 10.0.0.2 ping statistics --- 00:43:26.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.683 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:26.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:26.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:43:26.683 00:43:26.683 --- 10.0.0.1 ping statistics --- 00:43:26.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.683 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:26.683 12:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3980741 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3980741 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3980741 ']' 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:26.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:26.683 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:26.940 [2024-12-10 12:46:33.521767] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:26.940 [2024-12-10 12:46:33.523824] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:43:26.940 [2024-12-10 12:46:33.523892] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:26.940 [2024-12-10 12:46:33.640841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:26.940 [2024-12-10 12:46:33.750647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:26.940 [2024-12-10 12:46:33.750686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:26.940 [2024-12-10 12:46:33.750698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:26.940 [2024-12-10 12:46:33.750707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:26.940 [2024-12-10 12:46:33.750716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:26.940 [2024-12-10 12:46:33.752940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:26.940 [2024-12-10 12:46:33.753031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:26.940 [2024-12-10 12:46:33.753092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:26.940 [2024-12-10 12:46:33.753117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:27.506 [2024-12-10 12:46:34.063603] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
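Note: the nvmfappstart sequence above reduces to launching the target inside the test namespace and waiting for its RPC socket. A condensed sketch — waitforlisten is hand-rolled here as a plain socket poll; the suite's real helper also retries with a bounded count:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# poll /var/tmp/spdk.sock until the app answers, bailing out if it died
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done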
00:43:27.506 [2024-12-10 12:46:34.064543] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:27.506 [2024-12-10 12:46:34.065851] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:27.506 [2024-12-10 12:46:34.066323] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:27.506 [2024-12-10 12:46:34.066572] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.764 [2024-12-10 12:46:34.374212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.764 Malloc0 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.764 12:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.764 [2024-12-10 12:46:34.498214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:27.764 { 00:43:27.764 "params": { 00:43:27.764 "name": "Nvme$subsystem", 00:43:27.764 "trtype": "$TEST_TRANSPORT", 00:43:27.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:27.764 "adrfam": "ipv4", 00:43:27.764 "trsvcid": "$NVMF_PORT", 00:43:27.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:27.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:27.764 "hdgst": ${hdgst:-false}, 00:43:27.764 "ddgst": ${ddgst:-false} 00:43:27.764 }, 00:43:27.764 "method": "bdev_nvme_attach_controller" 00:43:27.764 } 00:43:27.764 EOF 00:43:27.764 )") 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:27.764 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:27.764 "params": { 00:43:27.764 "name": "Nvme1", 00:43:27.764 "trtype": "tcp", 00:43:27.764 "traddr": "10.0.0.2", 00:43:27.764 "adrfam": "ipv4", 00:43:27.764 "trsvcid": "4420", 00:43:27.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:27.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:27.764 "hdgst": false, 00:43:27.764 "ddgst": false 00:43:27.764 }, 00:43:27.764 "method": "bdev_nvme_attach_controller" 00:43:27.764 }' 00:43:27.764 [2024-12-10 12:46:34.573324] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
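Note: the JSON fragment printed above is only the bdev_nvme_attach_controller entry; gen_nvmf_target_json wraps it in a bdev-subsystem config (plus bdev_nvme_set_options and bdev_wait_for_examine entries, elided here) before bdevio reads it from /dev/fd/62. A hand-run equivalent, with the wrapper shape reproduced from memory rather than from this trace:

test/bdev/bdevio/bdevio --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)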
00:43:27.764 [2024-12-10 12:46:34.573408] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980855 ] 00:43:28.021 [2024-12-10 12:46:34.688297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:28.021 [2024-12-10 12:46:34.806147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:28.021 [2024-12-10 12:46:34.806215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:28.021 [2024-12-10 12:46:34.806219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:28.583 I/O targets: 00:43:28.583 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:28.583 00:43:28.583 00:43:28.583 CUnit - A unit testing framework for C - Version 2.1-3 00:43:28.583 http://cunit.sourceforge.net/ 00:43:28.583 00:43:28.583 00:43:28.583 Suite: bdevio tests on: Nvme1n1 00:43:28.583 Test: blockdev write read block ...passed 00:43:28.583 Test: blockdev write zeroes read block ...passed 00:43:28.583 Test: blockdev write zeroes read no split ...passed 00:43:28.583 Test: blockdev write zeroes read split ...passed 00:43:28.838 Test: blockdev write zeroes read split partial ...passed 00:43:28.838 Test: blockdev reset ...[2024-12-10 12:46:35.490976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:28.838 [2024-12-10 12:46:35.491078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:43:28.838 [2024-12-10 12:46:35.498147] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:43:28.838 passed 00:43:28.838 Test: blockdev write read 8 blocks ...passed 00:43:28.838 Test: blockdev write read size > 128k ...passed 00:43:28.838 Test: blockdev write read invalid size ...passed 00:43:28.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:28.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:28.838 Test: blockdev write read max offset ...passed 00:43:29.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:29.094 Test: blockdev writev readv 8 blocks ...passed 00:43:29.094 Test: blockdev writev readv 30 x 1block ...passed 00:43:29.094 Test: blockdev writev readv block ...passed 00:43:29.094 Test: blockdev writev readv size > 128k ...passed 00:43:29.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:29.094 Test: blockdev comparev and writev ...[2024-12-10 12:46:35.753845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.753887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.094 [2024-12-10 12:46:35.753907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.753918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:29.094 [2024-12-10 12:46:35.754277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.754296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:29.094 [2024-12-10 12:46:35.754312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.754323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:29.094 [2024-12-10 12:46:35.754675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.754690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:29.094 [2024-12-10 12:46:35.754712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.754723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:29.094 [2024-12-10 12:46:35.755053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.755069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:29.094 [2024-12-10 12:46:35.755090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:29.094 [2024-12-10 12:46:35.755102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:29.094 passed 00:43:29.094 Test: blockdev nvme passthru rw ...passed 00:43:29.094 Test: blockdev nvme passthru vendor specific ...[2024-12-10 12:46:35.837619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:29.094 [2024-12-10 12:46:35.837647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:29.095 [2024-12-10 12:46:35.837787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:29.095 [2024-12-10 12:46:35.837801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:29.095 [2024-12-10 12:46:35.837934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:29.095 [2024-12-10 12:46:35.837951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:29.095 [2024-12-10 12:46:35.838077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:29.095 [2024-12-10 12:46:35.838090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:29.095 passed 00:43:29.095 Test: blockdev nvme admin passthru ...passed 00:43:29.095 Test: blockdev copy ...passed 00:43:29.095 00:43:29.095 Run Summary: Type Total Ran Passed Failed Inactive 00:43:29.095 suites 1 1 n/a 0 0 00:43:29.095 tests 23 23 23 0 0 00:43:29.095 asserts 152 152 152 0 n/a 00:43:29.095 00:43:29.095 Elapsed time = 1.313 seconds 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:30.025 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:30.026 rmmod nvme_tcp 00:43:30.026 rmmod nvme_fabrics 00:43:30.026 rmmod nvme_keyring 00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
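Note: the nvmftestfini teardown running around this point — subsystem delete and module unload above, killprocess and network cleanup below — condenses to roughly the following (the netns removal happens inside _remove_spdk_ns, traced below):

rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # target/bdevio.sh@26
sync; modprobe -r nvme-tcp          # cascades the nvme_fabrics/nvme_keyring rmmods seen above
kill "$nvmfpid"                     # killprocess, after the uname/ps comm= sanity checks below
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK-tagged rule
ip netns delete cvl_0_0_ns_spdk     # _remove_spdk_ns, roughly
ip -4 addr flush cvl_0_1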
00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:30.026 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3980741 ']' 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3980741 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3980741 ']' 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3980741 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980741 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980741' 00:43:30.282 killing process with pid 3980741 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3980741 00:43:30.282 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3980741 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:31.654 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:33.552 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:33.552 00:43:33.552 real 0m12.285s 00:43:33.552 user 
0m17.585s 00:43:33.552 sys 0m5.241s 00:43:33.552 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:33.552 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:33.552 ************************************ 00:43:33.552 END TEST nvmf_bdevio 00:43:33.552 ************************************ 00:43:33.552 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:33.552 00:43:33.552 real 4m57.353s 00:43:33.552 user 10m5.912s 00:43:33.552 sys 1m50.756s 00:43:33.552 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:33.552 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:33.552 ************************************ 00:43:33.552 END TEST nvmf_target_core_interrupt_mode 00:43:33.552 ************************************ 00:43:33.552 12:46:40 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:33.552 12:46:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:33.552 12:46:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:33.552 12:46:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:33.811 ************************************ 00:43:33.811 START TEST nvmf_interrupt 00:43:33.811 ************************************ 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:33.811 * Looking for test storage... 
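Note: the '[' 1 -eq 1 ']' / NVMF_APP+=(--interrupt-mode) pair in the build_nvmf_app_args trace further below is where --interrupt-mode gets attached to every target launch in this suite. Shape of that plumbing, with the guard variable name a stand-in (only the appended flags come from the trace):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # nvmf/common.sh@29
if [ "$TEST_INTERRUPT_MODE" -eq 1 ]; then     # nvmf/common.sh@33, the literal '[' 1 -eq 1 ']'
    NVMF_APP+=(--interrupt-mode)              # nvmf/common.sh@34
fi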
00:43:33.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:33.811 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.812 --rc genhtml_branch_coverage=1 00:43:33.812 --rc genhtml_function_coverage=1 00:43:33.812 --rc genhtml_legend=1 00:43:33.812 --rc geninfo_all_blocks=1 00:43:33.812 --rc geninfo_unexecuted_blocks=1 00:43:33.812 00:43:33.812 ' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.812 --rc genhtml_branch_coverage=1 00:43:33.812 --rc genhtml_function_coverage=1 00:43:33.812 --rc genhtml_legend=1 00:43:33.812 --rc geninfo_all_blocks=1 00:43:33.812 --rc geninfo_unexecuted_blocks=1 00:43:33.812 00:43:33.812 ' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.812 --rc genhtml_branch_coverage=1 00:43:33.812 --rc genhtml_function_coverage=1 00:43:33.812 --rc genhtml_legend=1 00:43:33.812 --rc geninfo_all_blocks=1 00:43:33.812 --rc geninfo_unexecuted_blocks=1 00:43:33.812 00:43:33.812 ' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:33.812 --rc genhtml_branch_coverage=1 00:43:33.812 --rc genhtml_function_coverage=1 00:43:33.812 --rc genhtml_legend=1 00:43:33.812 --rc geninfo_all_blocks=1 00:43:33.812 --rc geninfo_unexecuted_blocks=1 00:43:33.812 00:43:33.812 ' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:33.812 12:46:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:39.073 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:39.073 12:46:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:39.073 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:39.073 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:39.074 Found net devices under 0000:af:00.0: cvl_0_0 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:39.074 Found net devices under 0000:af:00.1: cvl_0_1 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:39.074 12:46:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:39.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:39.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:43:39.074 00:43:39.074 --- 10.0.0.2 ping statistics --- 00:43:39.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:39.074 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:39.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:39.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:43:39.074 00:43:39.074 --- 10.0.0.1 ping statistics --- 00:43:39.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:39.074 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:39.074 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3984779 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3984779 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3984779 ']' 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:39.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:39.332 12:46:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:39.332 [2024-12-10 12:46:45.986618] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:39.332 [2024-12-10 12:46:45.988627] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:43:39.332 [2024-12-10 12:46:45.988710] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:39.332 [2024-12-10 12:46:46.107227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:39.590 [2024-12-10 12:46:46.214380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:43:39.590 [2024-12-10 12:46:46.214422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:39.590 [2024-12-10 12:46:46.214435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:39.590 [2024-12-10 12:46:46.214444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:39.590 [2024-12-10 12:46:46.214458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:39.590 [2024-12-10 12:46:46.216464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:39.590 [2024-12-10 12:46:46.216477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:39.847 [2024-12-10 12:46:46.539960] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:39.847 [2024-12-10 12:46:46.540649] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:39.847 [2024-12-10 12:46:46.540874] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:40.104 5000+0 records in 00:43:40.104 5000+0 records out 00:43:40.104 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184564 s, 555 MB/s 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 AIO0 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 [2024-12-10 12:46:46.881236] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.104 12:46:46 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 [2024-12-10 12:46:46.909467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3984779 0 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3984779 0 idle 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:40.104 12:46:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984779 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.63 reactor_0' 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984779 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.63 reactor_0 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=0 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3984779 1 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3984779 1 idle 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:40.362 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984788 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.00 reactor_1' 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984788 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.00 reactor_1 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3985041 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3984779 0 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3984779 0 busy 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:40.620 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:40.877 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984779 root 20 0 20.1t 215040 101376 R 80.0 0.2 0:00.76 reactor_0' 00:43:40.877 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984779 root 20 0 20.1t 215040 101376 R 80.0 0.2 0:00.76 reactor_0 00:43:40.877 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:40.877 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:40.877 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3984779 1 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3984779 1 busy 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:40.878 
12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984788 root 20 0 20.1t 218880 101376 R 93.8 0.2 0:00.24 reactor_1' 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984788 root 20 0 20.1t 218880 101376 R 93.8 0.2 0:00.24 reactor_1 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:40.878 12:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3985041 00:43:50.837 Initializing NVMe Controllers 00:43:50.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:50.837 Controller IO queue size 256, less than required. 00:43:50.837 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:50.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:50.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:50.837 Initialization complete. Launching workers. 
00:43:50.837 ======================================================== 00:43:50.837 Latency(us) 00:43:50.837 Device Information : IOPS MiB/s Average min max 00:43:50.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15064.11 58.84 17003.87 5275.05 22671.66 00:43:50.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14927.01 58.31 17158.86 5361.63 22089.83 00:43:50.837 ======================================================== 00:43:50.837 Total : 29991.13 117.15 17081.01 5275.05 22671.66 00:43:50.837 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3984779 0 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3984779 0 idle 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984779 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:20.62 reactor_0' 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984779 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:20.62 reactor_0 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3984779 1 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3984779 1 idle 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:50.837 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:51.094 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984788 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:10.00 reactor_1' 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984788 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:10.00 reactor_1 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:51.095 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:51.660 12:46:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:43:51.660 12:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:43:51.660 12:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:51.660 12:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:51.660 12:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt 
-- target/interrupt.sh@52 -- # for i in {0..1} 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3984779 0 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3984779 0 idle 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.559 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984779 root 20 0 20.1t 274176 119808 S 0.0 0.3 0:21.02 reactor_0' 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984779 root 20 0 20.1t 274176 119808 S 0.0 0.3 0:21.02 reactor_0 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3984779 1 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3984779 1 idle 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3984779 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.816 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.817 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3984779 -w 256 00:43:53.817 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:54.073 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3984788 root 20 0 20.1t 274176 119808 S 0.0 0.3 0:10.17 reactor_1' 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3984788 root 20 0 20.1t 274176 119808 S 0.0 0.3 0:10.17 reactor_1 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:54.074 12:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:54.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:54.638 rmmod nvme_tcp 00:43:54.638 rmmod nvme_fabrics 00:43:54.638 rmmod nvme_keyring 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3984779 ']' 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3984779 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3984779 ']' 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3984779 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3984779 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3984779' 00:43:54.638 killing process with pid 3984779 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3984779 00:43:54.638 12:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3984779 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:56.008 12:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:57.907 12:47:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:57.907 00:43:57.907 real 0m24.278s 00:43:57.907 user 0m41.678s 00:43:57.907 sys 0m8.231s 00:43:57.907 12:47:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:57.907 12:47:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:57.907 ************************************ 00:43:57.907 END TEST nvmf_interrupt 00:43:57.907 ************************************ 00:43:57.907 00:43:57.907 real 37m13.024s 00:43:57.907 user 92m11.857s 00:43:57.907 sys 9m40.209s 00:43:57.907 12:47:04 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:57.907 12:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:57.907 ************************************ 00:43:57.907 END TEST nvmf_tcp 00:43:57.907 ************************************ 00:43:58.165 12:47:04 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:43:58.165 12:47:04 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:58.165 12:47:04 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:58.165 12:47:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:58.165 12:47:04 -- common/autotest_common.sh@10 -- # set +x 00:43:58.165 ************************************ 00:43:58.165 START TEST spdkcli_nvmf_tcp 00:43:58.165 ************************************ 00:43:58.165 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:58.165 * Looking for test storage... 00:43:58.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:43:58.165 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.166 --rc genhtml_branch_coverage=1 00:43:58.166 --rc genhtml_function_coverage=1 00:43:58.166 --rc genhtml_legend=1 00:43:58.166 --rc geninfo_all_blocks=1 00:43:58.166 --rc geninfo_unexecuted_blocks=1 00:43:58.166 00:43:58.166 ' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.166 --rc genhtml_branch_coverage=1 00:43:58.166 --rc genhtml_function_coverage=1 00:43:58.166 --rc genhtml_legend=1 00:43:58.166 --rc geninfo_all_blocks=1 00:43:58.166 --rc geninfo_unexecuted_blocks=1 00:43:58.166 00:43:58.166 ' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.166 --rc genhtml_branch_coverage=1 00:43:58.166 --rc genhtml_function_coverage=1 00:43:58.166 --rc genhtml_legend=1 00:43:58.166 --rc geninfo_all_blocks=1 00:43:58.166 --rc geninfo_unexecuted_blocks=1 00:43:58.166 00:43:58.166 ' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.166 --rc genhtml_branch_coverage=1 00:43:58.166 --rc genhtml_function_coverage=1 00:43:58.166 --rc genhtml_legend=1 00:43:58.166 --rc geninfo_all_blocks=1 00:43:58.166 --rc geninfo_unexecuted_blocks=1 00:43:58.166 00:43:58.166 ' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:43:58.166 
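Annotation (not part of the captured log): the xtrace above shows scripts/common.sh deciding whether the installed lcov is older than 2 before enabling the explicit branch/function-coverage switches. A minimal standalone sketch of that dotted-version comparison, reconstructed from the trace rather than copied from the repository, so treat the helper's details as an approximation:

cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
    local IFS='.-:' i
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        # missing fields compare as 0, so "1.15" behaves like "1.15.0"
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && { [[ $2 == '>' || $2 == '>=' ]]; return; }
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && { [[ $2 == '<' || $2 == '<=' ]]; return; }
    done
    [[ $2 == *'='* ]]                  # all fields equal: only <=, >=, == hold
}

# lcov 1.x still needs the explicit coverage flags, exactly as the trace sets them
if cmp_versions "$(lcov --version | awk '{print $NF}')" '<' 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi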
12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:43:58.166 12:47:04 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:58.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3987925 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3987925 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3987925 ']' 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:58.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:58.166 12:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:58.433 [2024-12-10 12:47:05.041907] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
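Annotation (not part of the captured log): the "[: : integer expression expected" message above is a real, if harmless, script error — a numeric test of the form [ "$VAR" -eq 1 ] evaluated while the variable is empty in this environment. The flag name below is hypothetical (the trace only shows the empty expansion); the usual guard is to default the value before comparing:

# Hypothetical flag name -- defaulting to 0 avoids the empty-string comparison
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi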
00:43:58.433 [2024-12-10 12:47:05.041998] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987925 ] 00:43:58.433 [2024-12-10 12:47:05.153535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:58.728 [2024-12-10 12:47:05.267154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.728 [2024-12-10 12:47:05.267160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:59.042 12:47:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:59.042 12:47:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:43:59.042 12:47:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:43:59.042 12:47:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:59.042 12:47:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.299 12:47:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:43:59.299 12:47:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:43:59.299 12:47:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:43:59.299 12:47:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:59.299 12:47:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.299 12:47:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:59.299 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:59.299 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:43:59.299 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:43:59.299 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:43:59.299 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:43:59.299 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:43:59.299 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:59.299 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:59.299 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:43:59.299 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:43:59.299 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:43:59.299 ' 00:44:01.821 [2024-12-10 12:47:08.500802] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:03.190 [2024-12-10 12:47:09.732950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:05.709 [2024-12-10 12:47:12.064365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:07.604 [2024-12-10 12:47:14.070654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:08.973 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:08.973 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:08.973 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:08.973 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:08.973 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:08.973 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:08.973 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:08.973 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:08.973 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:08.973 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:08.973 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:08.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:08.973 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:08.973 12:47:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:09.538 
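Annotation (not part of the captured log): the check_match step above captures the live spdkcli tree and validates it against a golden template. A sketch of the same sequence using the relative paths visible in the trace; the .match file holds the expected listing, and the template may wildcard volatile fields:

# Dump the current /nvmf configuration as spdkcli sees it...
scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
# ...compare it against the checked-in template next to it...
test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
# ...and remove the generated file so the tree stays clean.
rm -f test/spdkcli/match_files/spdkcli_nvmf.test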
12:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:09.538 12:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:09.538 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:09.538 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:09.538 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:09.538 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:09.538 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:09.538 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:09.538 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:09.538 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:09.538 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:09.538 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:09.538 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:09.538 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:09.538 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:09.538 ' 00:44:16.088 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:16.088 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:16.088 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:16.088 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:16.088 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:16.088 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:16.088 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:16.088 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:16.088 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:16.088 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:16.088 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:16.088 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:16.088 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:16.088 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:16.088 
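Annotation (not part of the captured log): the delete batch above tears the configuration down in the reverse of the creation order — namespaces and hosts first, then listeners, then the subsystems themselves, and finally the malloc bdevs. Equivalent individual commands, as a sketch assuming spdkcli.py joins its arguments into one shell command the way the "ll /nvmf" invocation above suggests; the job script additionally asserts on each command's output:

scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'
scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'
scripts/spdkcli.py '/nvmf/subsystem delete_all'
scripts/spdkcli.py '/bdevs/malloc delete_all'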
12:47:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3987925 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3987925 ']' 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3987925 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3987925 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3987925' 00:44:16.088 killing process with pid 3987925 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3987925 00:44:16.088 12:47:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3987925 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3987925 ']' 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3987925 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3987925 ']' 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3987925 00:44:16.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3987925) - No such process 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3987925 is not found' 00:44:16.346 Process with pid 3987925 is not found 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:16.346 00:44:16.346 real 0m18.308s 00:44:16.346 user 0m37.875s 00:44:16.346 sys 0m0.870s 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:16.346 12:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:16.346 ************************************ 00:44:16.346 END TEST spdkcli_nvmf_tcp 00:44:16.346 ************************************ 00:44:16.346 12:47:23 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:16.346 12:47:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:16.346 12:47:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:16.346 12:47:23 -- common/autotest_common.sh@10 -- # set +x 00:44:16.346 ************************************ 00:44:16.346 START TEST nvmf_identify_passthru 00:44:16.346 ************************************ 00:44:16.347 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:16.605 * Looking for test 
storage... 00:44:16.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:16.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.605 --rc genhtml_branch_coverage=1 00:44:16.605 --rc genhtml_function_coverage=1 00:44:16.605 --rc genhtml_legend=1 00:44:16.605 --rc geninfo_all_blocks=1 00:44:16.605 --rc geninfo_unexecuted_blocks=1 00:44:16.605 00:44:16.605 ' 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:16.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.605 --rc genhtml_branch_coverage=1 00:44:16.605 --rc genhtml_function_coverage=1 00:44:16.605 --rc genhtml_legend=1 00:44:16.605 --rc geninfo_all_blocks=1 00:44:16.605 --rc geninfo_unexecuted_blocks=1 00:44:16.605 00:44:16.605 ' 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:16.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.605 --rc genhtml_branch_coverage=1 00:44:16.605 --rc genhtml_function_coverage=1 00:44:16.605 --rc genhtml_legend=1 00:44:16.605 --rc geninfo_all_blocks=1 00:44:16.605 --rc geninfo_unexecuted_blocks=1 00:44:16.605 00:44:16.605 ' 00:44:16.605 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:16.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.605 --rc genhtml_branch_coverage=1 00:44:16.605 --rc genhtml_function_coverage=1 00:44:16.605 --rc genhtml_legend=1 00:44:16.605 --rc geninfo_all_blocks=1 00:44:16.605 --rc geninfo_unexecuted_blocks=1 00:44:16.605 00:44:16.605 ' 00:44:16.605 12:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:16.605 12:47:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.605 12:47:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.605 12:47:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.605 12:47:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:16.605 12:47:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:16.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:16.605 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:16.605 12:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:16.605 12:47:23 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:16.605 12:47:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.605 12:47:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.606 12:47:23 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.606 12:47:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:16.606 12:47:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.606 12:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:16.606 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:16.606 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:16.606 12:47:23 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:16.606 12:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:21.870 12:47:28 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:21.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:21.870 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:21.870 Found net devices under 0000:af:00.0: cvl_0_0 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:21.870 Found net devices under 0000:af:00.1: cvl_0_1 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:21.870 12:47:28 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:21.870 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:22.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:22.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:44:22.128 00:44:22.128 --- 10.0.0.2 ping statistics --- 00:44:22.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:22.128 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:22.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:22.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:44:22.128 00:44:22.128 --- 10.0.0.1 ping statistics --- 00:44:22.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:22.128 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:22.128 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:22.129 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:22.129 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:22.129 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:22.129 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:22.129 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:22.129 12:47:28 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:22.129 12:47:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:22.129 12:47:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:22.129 12:47:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:22.387 12:47:29 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:22.387 12:47:29 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:44:22.387 12:47:29 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:44:22.387 12:47:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:44:22.387 12:47:29 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:44:22.387 12:47:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:22.387 12:47:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:22.387 12:47:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:26.564 12:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:44:26.564 12:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:26.564 12:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:26.564 12:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:30.755 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:44:30.755 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.755 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.755 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3995230 00:44:30.755 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:30.755 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:30.755 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3995230 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3995230 ']' 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:30.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:30.755 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.755 [2024-12-10 12:47:37.559991] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:44:30.755 [2024-12-10 12:47:37.560082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:31.013 [2024-12-10 12:47:37.677571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:31.013 [2024-12-10 12:47:37.782497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:31.013 [2024-12-10 12:47:37.782543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
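Annotation (not part of the captured log): before the passthru target comes up, the test records the local controller's serial and model via spdk_nvme_identify so they can later be matched against what the exported NVMe-oF subsystem reports. The extraction, restated compactly for the BDF found above — note that awk '{print $3}' keeps only the first word of the model string, which is why the log records just "INTEL":

bdf=0000:5e:00.0   # first NVMe BDF reported by scripts/gen_nvme.sh
serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
         grep 'Serial Number:' | awk '{print $3}')
model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
        grep 'Model Number:' | awk '{print $3}')
echo "local controller: serial=$serial model=$model"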
00:44:31.013 [2024-12-10 12:47:37.782553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:31.013 [2024-12-10 12:47:37.782563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:31.013 [2024-12-10 12:47:37.782573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:31.013 [2024-12-10 12:47:37.784892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:31.013 [2024-12-10 12:47:37.784967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:31.013 [2024-12-10 12:47:37.785029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.013 [2024-12-10 12:47:37.785039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:31.580 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:31.580 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:31.580 12:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:31.580 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.580 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:31.580 INFO: Log level set to 20 00:44:31.580 INFO: Requests: 00:44:31.580 { 00:44:31.580 "jsonrpc": "2.0", 00:44:31.580 "method": "nvmf_set_config", 00:44:31.580 "id": 1, 00:44:31.580 "params": { 00:44:31.580 "admin_cmd_passthru": { 00:44:31.580 "identify_ctrlr": true 00:44:31.580 } 00:44:31.580 } 00:44:31.580 } 00:44:31.580 00:44:31.580 INFO: response: 00:44:31.580 { 00:44:31.580 "jsonrpc": "2.0", 00:44:31.580 "id": 1, 00:44:31.580 "result": true 00:44:31.580 } 00:44:31.580 00:44:31.580 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.580 12:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:31.580 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.580 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:31.580 INFO: Setting log level to 20 00:44:31.580 INFO: Setting log level to 20 00:44:31.580 INFO: Log level set to 20 00:44:31.580 INFO: Log level set to 20 00:44:31.580 INFO: Requests: 00:44:31.580 { 00:44:31.580 "jsonrpc": "2.0", 00:44:31.580 "method": "framework_start_init", 00:44:31.580 "id": 1 00:44:31.580 } 00:44:31.580 00:44:31.580 INFO: Requests: 00:44:31.580 { 00:44:31.580 "jsonrpc": "2.0", 00:44:31.580 "method": "framework_start_init", 00:44:31.580 "id": 1 00:44:31.580 } 00:44:31.580 00:44:32.147 [2024-12-10 12:47:38.701704] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:32.147 INFO: response: 00:44:32.147 { 00:44:32.147 "jsonrpc": "2.0", 00:44:32.147 "id": 1, 00:44:32.147 "result": true 00:44:32.147 } 00:44:32.147 00:44:32.147 INFO: response: 00:44:32.147 { 00:44:32.147 "jsonrpc": "2.0", 00:44:32.147 "id": 1, 00:44:32.147 "result": true 00:44:32.147 } 00:44:32.147 00:44:32.147 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.147 12:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:32.147 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.147 12:47:38 nvmf_identify_passthru -- 
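The request/response pairs printed here are the whole passthru switch: nvmf_set_config flips admin_cmd_passthru.identify_ctrlr to true while the target is still paused, then framework_start_init lets it finish booting. rpc_cmd is a thin wrapper over scripts/rpc.py, so the direct equivalents are:

    # Must run before framework_start_init, i.e. while --wait-for-rpc holds the app.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_set_config --passthru-identify-ctrlr   # admin_cmd_passthru.identify_ctrlr=true
    $rpc framework_start_init                        # resume subsystem initialization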
common/autotest_common.sh@10 -- # set +x 00:44:32.147 INFO: Setting log level to 40 00:44:32.147 INFO: Setting log level to 40 00:44:32.147 INFO: Setting log level to 40 00:44:32.147 [2024-12-10 12:47:38.717879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:32.147 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.147 12:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:32.147 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:32.147 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.147 12:47:38 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:44:32.147 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.147 12:47:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.427 Nvme0n1 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.427 [2024-12-10 12:47:41.687739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.427 [ 00:44:35.427 { 00:44:35.427 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:35.427 "subtype": "Discovery", 00:44:35.427 "listen_addresses": [], 00:44:35.427 "allow_any_host": true, 00:44:35.427 "hosts": [] 00:44:35.427 }, 00:44:35.427 { 00:44:35.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:35.427 "subtype": "NVMe", 00:44:35.427 "listen_addresses": [ 00:44:35.427 { 00:44:35.427 "trtype": "TCP", 00:44:35.427 "adrfam": "IPv4", 00:44:35.427 "traddr": "10.0.0.2", 00:44:35.427 "trsvcid": "4420" 00:44:35.427 } 00:44:35.427 ], 00:44:35.427 "allow_any_host": true, 00:44:35.427 "hosts": [], 00:44:35.427 "serial_number": 
"SPDK00000000000001", 00:44:35.427 "model_number": "SPDK bdev Controller", 00:44:35.427 "max_namespaces": 1, 00:44:35.427 "min_cntlid": 1, 00:44:35.427 "max_cntlid": 65519, 00:44:35.427 "namespaces": [ 00:44:35.427 { 00:44:35.427 "nsid": 1, 00:44:35.427 "bdev_name": "Nvme0n1", 00:44:35.427 "name": "Nvme0n1", 00:44:35.427 "nguid": "E7A45FB3F1C84F7C8B26EB563F1E0A64", 00:44:35.427 "uuid": "e7a45fb3-f1c8-4f7c-8b26-eb563f1e0a64" 00:44:35.427 } 00:44:35.427 ] 00:44:35.427 } 00:44:35.427 ] 00:44:35.427 12:47:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:35.427 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:35.427 12:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:44:35.428 12:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:44:35.428 12:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:44:35.428 12:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.428 12:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:35.428 12:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:35.428 rmmod nvme_tcp 00:44:35.428 rmmod nvme_fabrics 00:44:35.428 rmmod nvme_keyring 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3995230 ']' 00:44:35.428 12:47:42 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3995230 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3995230 ']' 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3995230 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:35.428 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3995230 00:44:35.686 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:35.686 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:35.686 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3995230' 00:44:35.686 killing process with pid 3995230 00:44:35.686 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3995230 00:44:35.686 12:47:42 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3995230 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:38.215 12:47:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:38.215 12:47:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:38.215 12:47:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:40.129 12:47:46 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:40.129 00:44:40.129 real 0m23.709s 00:44:40.129 user 0m33.604s 00:44:40.129 sys 0m6.254s 00:44:40.129 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:40.129 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:40.129 ************************************ 00:44:40.129 END TEST nvmf_identify_passthru 00:44:40.129 ************************************ 00:44:40.129 12:47:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:40.129 12:47:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:40.129 12:47:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:40.129 12:47:46 -- common/autotest_common.sh@10 -- # set +x 00:44:40.129 ************************************ 00:44:40.129 START TEST nvmf_dif 00:44:40.129 ************************************ 00:44:40.129 12:47:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:40.389 * Looking for test 
storage... 00:44:40.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.389 --rc genhtml_branch_coverage=1 00:44:40.389 --rc genhtml_function_coverage=1 00:44:40.389 --rc genhtml_legend=1 00:44:40.389 --rc geninfo_all_blocks=1 00:44:40.389 --rc geninfo_unexecuted_blocks=1 00:44:40.389 00:44:40.389 ' 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.389 --rc genhtml_branch_coverage=1 00:44:40.389 --rc genhtml_function_coverage=1 00:44:40.389 --rc genhtml_legend=1 00:44:40.389 --rc geninfo_all_blocks=1 00:44:40.389 --rc geninfo_unexecuted_blocks=1 00:44:40.389 00:44:40.389 ' 00:44:40.389 12:47:47 nvmf_dif -- 
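The lcov probe traced here runs cmp_versions from scripts/common.sh ("lt 1.15 2") to pick the right LCOV_OPTS set. The xtrace walks the comparison field by field; reconstructed as a standalone function under the same logic (the function name is illustrative):

    # Split both versions on '.', '-' and ':' (the IFS=.-: reads in the trace),
    # then compare numerically, treating missing fields as 0.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: export the 1.x LCOV_OPTS"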
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.389 --rc genhtml_branch_coverage=1 00:44:40.389 --rc genhtml_function_coverage=1 00:44:40.389 --rc genhtml_legend=1 00:44:40.389 --rc geninfo_all_blocks=1 00:44:40.389 --rc geninfo_unexecuted_blocks=1 00:44:40.389 00:44:40.389 ' 00:44:40.389 12:47:47 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.389 --rc genhtml_branch_coverage=1 00:44:40.389 --rc genhtml_function_coverage=1 00:44:40.389 --rc genhtml_legend=1 00:44:40.389 --rc geninfo_all_blocks=1 00:44:40.389 --rc geninfo_unexecuted_blocks=1 00:44:40.389 00:44:40.389 ' 00:44:40.389 12:47:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:40.389 12:47:47 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:40.389 12:47:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.389 12:47:47 nvmf_dif -- paths/export.sh@3 -- # 
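nvmf/common.sh derives the host identity from nvme-cli: NVME_HOSTNQN comes straight from nvme gen-hostnqn and NVME_HOSTID is its trailing UUID, exactly the 80b56b8f-... pair shown here. Standalone:

    hostnqn=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*:}         # strip through the last ':' to keep the UUID
    printf -- '--hostnqn=%s --hostid=%s\n' "$hostnqn" "$hostid"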
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.389 12:47:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.389 12:47:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:40.389 12:47:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:40.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:40.389 12:47:47 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:40.389 12:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:40.389 12:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:40.389 12:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:40.389 12:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:40.390 12:47:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:40.390 12:47:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:40.390 12:47:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:40.390 12:47:47 nvmf_dif -- nvmf/common.sh@309 -- # 
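Note the failure captured just above: common.sh line 33 evaluates '[' '' -eq 1 ']' because the tested variable is empty, bash prints "integer expression expected", and the branch silently falls through. A defensive form that avoids the error (SOME_FLAG is hypothetical; the real variable lives in build_nvmf_app_args):

    # ':-0' supplies a numeric default so the -eq test never sees an empty operand.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi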
xtrace_disable 00:44:40.390 12:47:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:45.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:45.656 
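The device scan above matched both ice (0x159b) functions; next, as the trace that follows shows, gather_supported_nvmf_pci_devs maps each PCI function to its kernel netdev through sysfs. The lookup in isolation, using the first port from this run:

    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the names
    printf 'Found net devices under %s: %s\n' "$pci" "${pci_net_devs[*]}"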
12:47:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:45.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:45.656 Found net devices under 0000:af:00.0: cvl_0_0 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:45.656 Found net devices under 0000:af:00.1: cvl_0_1 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:45.656 12:47:52 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:45.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:45.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:44:45.657 00:44:45.657 --- 10.0.0.2 ping statistics --- 00:44:45.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.657 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:45.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
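nvmf_tcp_init then splits the two ports: cvl_0_0 moves into its own namespace as the target side, both ends get a 10.0.0.x/24 address, the NVMe/TCP port is opened in iptables, and connectivity is ping-checked in each direction. The whole sequence, collected from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host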
00:44:45.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:44:45.657 00:44:45.657 --- 10.0.0.1 ping statistics --- 00:44:45.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.657 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:45.657 12:47:52 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:48.183 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:44:48.183 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:44:48.183 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:44:48.183 12:47:54 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:48.183 12:47:54 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:48.183 12:47:54 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:48.183 12:47:54 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:48.183 12:47:54 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:48.183 12:47:54 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:48.183 12:47:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:48.184 12:47:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:48.184 12:47:54 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:48.184 12:47:54 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4000811 00:44:48.184 12:47:54 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4000811 00:44:48.184 12:47:54 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 4000811 ']' 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:44:48.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:48.184 12:47:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:48.184 [2024-12-10 12:47:54.839615] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:44:48.184 [2024-12-10 12:47:54.839705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:48.184 [2024-12-10 12:47:54.955793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.440 [2024-12-10 12:47:55.060338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:48.440 [2024-12-10 12:47:55.060380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:48.440 [2024-12-10 12:47:55.060390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:48.440 [2024-12-10 12:47:55.060400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:48.440 [2024-12-10 12:47:55.060408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:48.440 [2024-12-10 12:47:55.061773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:44:49.004 12:47:55 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:49.004 12:47:55 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:49.004 12:47:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:49.004 12:47:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:49.004 [2024-12-10 12:47:55.676338] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.004 12:47:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:49.004 12:47:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:49.004 ************************************ 00:44:49.004 START TEST fio_dif_1_default 00:44:49.004 ************************************ 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- 
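dif.sh appends --dif-insert-or-strip to NVMF_TRANSPORT_OPTS, so the transport created above inserts and strips protection information in software on the TCP path. The rpc.py equivalent of that create_transport call:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip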
target/dif.sh@31 -- # create_subsystem 0 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.004 bdev_null0 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:49.004 [2024-12-10 12:47:55.752685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:49.004 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:49.005 { 00:44:49.005 "params": { 00:44:49.005 "name": "Nvme$subsystem", 00:44:49.005 "trtype": "$TEST_TRANSPORT", 00:44:49.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:49.005 "adrfam": "ipv4", 00:44:49.005 "trsvcid": "$NVMF_PORT", 00:44:49.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:49.005 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:44:49.005 "hdgst": ${hdgst:-false}, 00:44:49.005 "ddgst": ${ddgst:-false} 00:44:49.005 }, 00:44:49.005 "method": "bdev_nvme_attach_controller" 00:44:49.005 } 00:44:49.005 EOF 00:44:49.005 )") 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:49.005 "params": { 00:44:49.005 "name": "Nvme0", 00:44:49.005 "trtype": "tcp", 00:44:49.005 "traddr": "10.0.0.2", 00:44:49.005 "adrfam": "ipv4", 00:44:49.005 "trsvcid": "4420", 00:44:49.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:49.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:49.005 "hdgst": false, 00:44:49.005 "ddgst": false 00:44:49.005 }, 00:44:49.005 "method": "bdev_nvme_attach_controller" 00:44:49.005 }' 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:49.005 12:47:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:49.594 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:49.594 fio-3.35 00:44:49.594 Starting 1 thread 00:45:01.802 00:45:01.802 filename0: (groupid=0, jobs=1): err= 0: pid=4001271: Tue Dec 10 12:48:07 2024 00:45:01.802 read: IOPS=193, BW=773KiB/s (792kB/s)(7744KiB/10013msec) 00:45:01.802 slat (nsec): min=6811, max=39331, avg=8259.20, stdev=2166.67 00:45:01.802 clat (usec): min=442, max=42922, avg=20661.74, stdev=20487.80 00:45:01.802 lat (usec): min=449, max=42961, avg=20670.00, stdev=20487.39 00:45:01.802 clat percentiles (usec): 00:45:01.802 | 1.00th=[ 457], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 490], 00:45:01.802 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 660], 60.00th=[41157], 00:45:01.802 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:45:01.802 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:45:01.802 | 99.99th=[42730] 00:45:01.802 bw ( KiB/s): min= 704, max= 832, per=99.82%, avg=772.80, stdev=33.28, samples=20 00:45:01.802 iops : min= 176, max= 208, avg=193.20, stdev= 8.32, samples=20 00:45:01.802 lat (usec) : 500=31.20%, 750=19.63% 00:45:01.802 lat (msec) : 50=49.17% 00:45:01.802 cpu : usr=93.90%, sys=5.77%, ctx=13, majf=0, minf=1634 00:45:01.802 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:01.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:01.802 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:01.802 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:01.802 00:45:01.802 Run status group 0 (all jobs): 00:45:01.802 READ: bw=773KiB/s (792kB/s), 773KiB/s-773KiB/s (792kB/s-792kB/s), io=7744KiB (7930kB), run=10013-10013msec 00:45:01.802 ----------------------------------------------------- 00:45:01.802 Suppressions used: 00:45:01.802 count bytes template 00:45:01.802 1 8 /usr/src/fio/parse.c 00:45:01.802 1 8 libtcmalloc_minimal.so 00:45:01.802 1 904 libcrypto.so 00:45:01.802 
----------------------------------------------------- 00:45:01.802 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.802 00:45:01.802 real 0m12.494s 00:45:01.802 user 0m17.682s 00:45:01.802 sys 0m1.099s 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:01.802 ************************************ 00:45:01.802 END TEST fio_dif_1_default 00:45:01.802 ************************************ 00:45:01.802 12:48:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:01.802 12:48:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:01.802 12:48:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:01.802 12:48:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:01.802 ************************************ 00:45:01.802 START TEST fio_dif_1_multi_subsystems 00:45:01.802 ************************************ 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.802 bdev_null0 00:45:01.802 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.803 [2024-12-10 12:48:08.312831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.803 bdev_null1 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:01.803 
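fio_dif_1_multi_subsystems simply repeats the single-subsystem setup for sub IDs 0 and 1: one 64 MB null bdev with 16-byte metadata and DIF type 1 per NQN, both listening on the same 10.0.0.2:4420 portal. The rpc_cmd sequence above, condensed into a loop:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for sub in 0 1; do
        $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done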
12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:01.803 { 00:45:01.803 "params": { 00:45:01.803 "name": "Nvme$subsystem", 00:45:01.803 "trtype": "$TEST_TRANSPORT", 00:45:01.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:01.803 "adrfam": "ipv4", 00:45:01.803 "trsvcid": "$NVMF_PORT", 00:45:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:01.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:01.803 "hdgst": ${hdgst:-false}, 00:45:01.803 "ddgst": ${ddgst:-false} 00:45:01.803 }, 00:45:01.803 "method": "bdev_nvme_attach_controller" 00:45:01.803 } 00:45:01.803 EOF 00:45:01.803 )") 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:01.803 
12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:01.803 { 00:45:01.803 "params": { 00:45:01.803 "name": "Nvme$subsystem", 00:45:01.803 "trtype": "$TEST_TRANSPORT", 00:45:01.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:01.803 "adrfam": "ipv4", 00:45:01.803 "trsvcid": "$NVMF_PORT", 00:45:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:01.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:01.803 "hdgst": ${hdgst:-false}, 00:45:01.803 "ddgst": ${ddgst:-false} 00:45:01.803 }, 00:45:01.803 "method": "bdev_nvme_attach_controller" 00:45:01.803 } 00:45:01.803 EOF 00:45:01.803 )") 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:01.803 "params": { 00:45:01.803 "name": "Nvme0", 00:45:01.803 "trtype": "tcp", 00:45:01.803 "traddr": "10.0.0.2", 00:45:01.803 "adrfam": "ipv4", 00:45:01.803 "trsvcid": "4420", 00:45:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:01.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:01.803 "hdgst": false, 00:45:01.803 "ddgst": false 00:45:01.803 }, 00:45:01.803 "method": "bdev_nvme_attach_controller" 00:45:01.803 },{ 00:45:01.803 "params": { 00:45:01.803 "name": "Nvme1", 00:45:01.803 "trtype": "tcp", 00:45:01.803 "traddr": "10.0.0.2", 00:45:01.803 "adrfam": "ipv4", 00:45:01.803 "trsvcid": "4420", 00:45:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:01.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:01.803 "hdgst": false, 00:45:01.803 "ddgst": false 00:45:01.803 }, 00:45:01.803 "method": "bdev_nvme_attach_controller" 00:45:01.803 }' 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:01.803 12:48:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:02.062 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:02.062 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:02.062 fio-3.35 00:45:02.062 Starting 2 threads 00:45:14.342 00:45:14.342 filename0: (groupid=0, jobs=1): err= 0: pid=4003830: Tue Dec 10 12:48:19 2024 00:45:14.342 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10009msec) 00:45:14.342 slat (nsec): min=6917, max=33419, avg=8659.46, stdev=2631.50 00:45:14.342 clat (usec): min=462, max=43911, avg=21180.12, stdev=20471.50 00:45:14.342 lat (usec): min=469, max=43944, avg=21188.78, stdev=20470.84 00:45:14.342 clat percentiles (usec): 00:45:14.342 | 1.00th=[ 469], 5.00th=[ 478], 10.00th=[ 482], 20.00th=[ 494], 00:45:14.342 | 30.00th=[ 502], 40.00th=[ 627], 50.00th=[41157], 60.00th=[41157], 00:45:14.342 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:45:14.342 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:45:14.342 | 99.99th=[43779] 00:45:14.342 bw ( KiB/s): min= 672, max= 768, per=65.79%, avg=753.60, stdev=30.22, samples=20 00:45:14.342 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:45:14.342 lat (usec) : 500=28.07%, 750=20.34%, 1000=1.17% 00:45:14.342 lat (msec) : 50=50.42% 00:45:14.342 cpu : usr=96.97%, sys=2.76%, ctx=13, majf=0, minf=1634 00:45:14.342 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:14.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:14.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:14.342 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:14.342 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:14.342 filename1: (groupid=0, jobs=1): err= 0: pid=4003831: Tue Dec 10 12:48:19 2024 00:45:14.342 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:45:14.342 slat (nsec): min=6988, max=31937, avg=9575.30, stdev=3385.74 00:45:14.342 clat (usec): min=686, max=45790, avg=40974.46, stdev=2640.37 00:45:14.342 lat (usec): min=693, max=45822, avg=40984.04, stdev=2640.59 00:45:14.342 clat percentiles (usec): 00:45:14.342 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:45:14.342 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:14.342 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:45:14.342 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:45:14.342 | 99.99th=[45876] 00:45:14.342 bw ( KiB/s): min= 352, max= 416, per=34.07%, avg=390.74, stdev=17.13, samples=19 00:45:14.342 iops : min= 88, max= 104, avg=97.68, stdev= 4.28, samples=19 00:45:14.342 lat (usec) : 750=0.41% 00:45:14.343 lat (msec) : 50=99.59% 00:45:14.343 cpu : usr=96.78%, sys=2.93%, ctx=14, majf=0, minf=1635 00:45:14.343 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:14.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:14.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:14.343 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:14.343 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:14.343 00:45:14.343 Run status group 0 (all jobs): 00:45:14.343 READ: bw=1145KiB/s (1172kB/s), 390KiB/s-755KiB/s (400kB/s-773kB/s), io=11.2MiB (11.7MB), run=10005-10009msec 00:45:14.343 ----------------------------------------------------- 00:45:14.343 Suppressions used: 00:45:14.343 count bytes template 00:45:14.343 2 16 /usr/src/fio/parse.c 00:45:14.343 1 8 libtcmalloc_minimal.so 00:45:14.343 1 904 
libcrypto.so 00:45:14.343 ----------------------------------------------------- 00:45:14.343 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.343 00:45:14.343 real 0m12.860s 00:45:14.343 user 0m27.541s 00:45:14.343 sys 0m1.091s 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:14.343 12:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:14.343 ************************************ 00:45:14.343 END TEST fio_dif_1_multi_subsystems 00:45:14.343 ************************************ 00:45:14.622 12:48:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:14.622 12:48:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:14.622 12:48:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:14.622 12:48:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:14.622 
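The JSON that fio consumed through /dev/fd/62 in the test above is printed in full by the printf trace. A standalone reproduction can write it to a file instead; the "subsystems"/"bdev" wrapper is an assumption based on SPDK's standard JSON-config layout (it reaches jq through a heredoc and is therefore not echoed), as are the Nvme0n1/Nvme1n1 names, which SPDK derives for namespace 1 of each attached controller:

    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            },
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Preload the SPDK fio plugin and point fio at the config, one job per bdev,
    # matching the randread/4k/iodepth=4 banner printed by fio above.
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
        --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 \
        --name=filename1 --filename=Nvme1n1 --rw=randread --bs=4k --iodepth=4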
************************************ 00:45:14.622 START TEST fio_dif_rand_params 00:45:14.622 ************************************ 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:14.622 bdev_null0 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:14.622 [2024-12-10 12:48:21.255140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:14.622 12:48:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:14.622 { 00:45:14.622 "params": { 00:45:14.622 "name": "Nvme$subsystem", 00:45:14.622 "trtype": "$TEST_TRANSPORT", 00:45:14.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:14.622 "adrfam": "ipv4", 00:45:14.622 "trsvcid": "$NVMF_PORT", 00:45:14.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:14.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:14.622 "hdgst": ${hdgst:-false}, 00:45:14.622 "ddgst": ${ddgst:-false} 00:45:14.622 }, 00:45:14.622 "method": "bdev_nvme_attach_controller" 00:45:14.622 } 00:45:14.622 EOF 00:45:14.622 )") 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
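The config assembly traced here follows the same pattern as in the previous test: one JSON fragment per subsystem is captured into a bash array from a heredoc template, then (in the IFS=, and printf lines that follow) the fragments are joined with commas and validated with jq. A minimal sketch of that pattern, with the wrapper object assumed since the heredoc handed to jq is not echoed:

    gen_config() {
      local -a config=()
      local sub
      for sub in "$@"; do
        # One fragment per subsystem index; the real helper also fills in
        # traddr, trsvcid, subnqn/hostnqn and the hdgst/ddgst flags.
        config+=("{\"method\": \"bdev_nvme_attach_controller\", \"params\": {\"name\": \"Nvme$sub\"}}")
      done
      local IFS=,   # makes ${config[*]} join the fragments with commas
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
    }
    gen_config 0    # this subtest drives a single subsystem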
00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:14.622 "params": { 00:45:14.622 "name": "Nvme0", 00:45:14.622 "trtype": "tcp", 00:45:14.622 "traddr": "10.0.0.2", 00:45:14.622 "adrfam": "ipv4", 00:45:14.622 "trsvcid": "4420", 00:45:14.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:14.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:14.622 "hdgst": false, 00:45:14.622 "ddgst": false 00:45:14.622 }, 00:45:14.622 "method": "bdev_nvme_attach_controller" 00:45:14.622 }' 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:14.622 12:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:14.882 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:14.882 ... 00:45:14.882 fio-3.35 00:45:14.882 Starting 3 threads 00:45:21.441 00:45:21.441 filename0: (groupid=0, jobs=1): err= 0: pid=4005958: Tue Dec 10 12:48:27 2024 00:45:21.441 read: IOPS=288, BW=36.1MiB/s (37.8MB/s)(181MiB/5007msec) 00:45:21.441 slat (nsec): min=7306, max=31899, avg=13603.32, stdev=1731.22 00:45:21.441 clat (usec): min=4261, max=51482, avg=10374.74, stdev=3441.60 00:45:21.441 lat (usec): min=4273, max=51498, avg=10388.35, stdev=3441.62 00:45:21.441 clat percentiles (usec): 00:45:21.441 | 1.00th=[ 4686], 5.00th=[ 7242], 10.00th=[ 8094], 20.00th=[ 9110], 00:45:21.441 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:45:21.441 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11994], 95.00th=[12780], 00:45:21.441 | 99.00th=[14746], 99.50th=[44827], 99.90th=[50594], 99.95th=[51643], 00:45:21.441 | 99.99th=[51643] 00:45:21.441 bw ( KiB/s): min=31232, max=42240, per=36.94%, avg=36940.80, stdev=2783.62, samples=10 00:45:21.441 iops : min= 244, max= 330, avg=288.60, stdev=21.75, samples=10 00:45:21.441 lat (msec) : 10=42.70%, 20=56.68%, 50=0.42%, 100=0.21% 00:45:21.441 cpu : usr=93.53%, sys=6.07%, ctx=8, majf=0, minf=1632 00:45:21.441 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:21.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.441 issued rwts: total=1445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.441 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:21.441 filename0: (groupid=0, jobs=1): err= 0: pid=4005959: Tue Dec 10 12:48:27 2024 00:45:21.441 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5005msec) 00:45:21.441 slat (nsec): min=7450, max=30439, avg=14017.84, stdev=1702.04 00:45:21.441 clat (usec): min=4293, max=55087, avg=12297.56, stdev=4852.08 00:45:21.441 lat (usec): min=4305, max=55103, avg=12311.58, stdev=4852.12 00:45:21.441 clat percentiles (usec): 00:45:21.441 | 1.00th=[ 6849], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10159], 00:45:21.441 | 
30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:45:21.441 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14353], 95.00th=[14746], 00:45:21.441 | 99.00th=[49546], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:45:21.441 | 99.99th=[55313] 00:45:21.441 bw ( KiB/s): min=23599, max=33536, per=31.16%, avg=31159.90, stdev=2831.76, samples=10 00:45:21.441 iops : min= 184, max= 262, avg=243.40, stdev=22.23, samples=10 00:45:21.441 lat (msec) : 10=17.72%, 20=81.05%, 50=0.33%, 100=0.90% 00:45:21.441 cpu : usr=94.00%, sys=5.64%, ctx=7, majf=0, minf=1636 00:45:21.441 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:21.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.441 issued rwts: total=1219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.441 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:21.441 filename0: (groupid=0, jobs=1): err= 0: pid=4005960: Tue Dec 10 12:48:27 2024 00:45:21.441 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5005msec) 00:45:21.441 slat (nsec): min=7465, max=35577, avg=13568.33, stdev=1732.40 00:45:21.441 clat (usec): min=6117, max=53333, avg=12012.66, stdev=5693.08 00:45:21.441 lat (usec): min=6128, max=53348, avg=12026.23, stdev=5693.08 00:45:21.441 clat percentiles (usec): 00:45:21.441 | 1.00th=[ 6783], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[ 9896], 00:45:21.441 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:45:21.441 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13566], 95.00th=[14353], 00:45:21.441 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:45:21.441 | 99.99th=[53216] 00:45:21.441 bw ( KiB/s): min=26059, max=35328, per=32.10%, avg=32107.89, stdev=2715.24, samples=9 00:45:21.441 iops : min= 203, max= 276, avg=250.78, stdev=21.38, samples=9 00:45:21.441 lat (msec) : 10=21.23%, 20=76.84%, 50=0.56%, 100=1.36% 00:45:21.441 cpu : usr=94.08%, sys=5.54%, ctx=7, majf=0, minf=1631 00:45:21.441 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:21.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.441 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.441 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:21.441 00:45:21.441 Run status group 0 (all jobs): 00:45:21.441 READ: bw=97.7MiB/s (102MB/s), 30.4MiB/s-36.1MiB/s (31.9MB/s-37.8MB/s), io=489MiB (513MB), run=5005-5007msec 00:45:22.011 ----------------------------------------------------- 00:45:22.011 Suppressions used: 00:45:22.011 count bytes template 00:45:22.011 5 44 /usr/src/fio/parse.c 00:45:22.011 1 8 libtcmalloc_minimal.so 00:45:22.011 1 904 libcrypto.so 00:45:22.011 ----------------------------------------------------- 00:45:22.011 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 bdev_null0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 [2024-12-10 12:48:28.650001] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 bdev_null1 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 bdev_null2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:22.011 { 00:45:22.011 "params": { 00:45:22.011 "name": "Nvme$subsystem", 00:45:22.011 "trtype": "$TEST_TRANSPORT", 00:45:22.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:22.011 "adrfam": "ipv4", 00:45:22.011 "trsvcid": "$NVMF_PORT", 00:45:22.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:22.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:22.011 "hdgst": ${hdgst:-false}, 00:45:22.011 "ddgst": ${ddgst:-false} 00:45:22.011 }, 00:45:22.011 "method": "bdev_nvme_attach_controller" 00:45:22.011 } 00:45:22.011 EOF 00:45:22.011 )") 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:22.011 12:48:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:22.012 { 00:45:22.012 "params": { 00:45:22.012 "name": "Nvme$subsystem", 00:45:22.012 "trtype": "$TEST_TRANSPORT", 00:45:22.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:22.012 "adrfam": "ipv4", 00:45:22.012 "trsvcid": "$NVMF_PORT", 00:45:22.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:22.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:22.012 "hdgst": ${hdgst:-false}, 00:45:22.012 "ddgst": ${ddgst:-false} 00:45:22.012 }, 00:45:22.012 "method": "bdev_nvme_attach_controller" 00:45:22.012 } 00:45:22.012 EOF 00:45:22.012 )") 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:22.012 { 00:45:22.012 "params": { 00:45:22.012 "name": "Nvme$subsystem", 00:45:22.012 "trtype": "$TEST_TRANSPORT", 00:45:22.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:22.012 "adrfam": "ipv4", 00:45:22.012 "trsvcid": "$NVMF_PORT", 00:45:22.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:22.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:22.012 "hdgst": ${hdgst:-false}, 00:45:22.012 "ddgst": ${ddgst:-false} 00:45:22.012 }, 00:45:22.012 "method": "bdev_nvme_attach_controller" 00:45:22.012 } 00:45:22.012 EOF 00:45:22.012 )") 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:22.012 "params": { 00:45:22.012 "name": "Nvme0", 00:45:22.012 "trtype": "tcp", 00:45:22.012 "traddr": "10.0.0.2", 00:45:22.012 "adrfam": "ipv4", 00:45:22.012 "trsvcid": "4420", 00:45:22.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:22.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:22.012 "hdgst": false, 00:45:22.012 "ddgst": false 00:45:22.012 }, 00:45:22.012 "method": "bdev_nvme_attach_controller" 00:45:22.012 },{ 00:45:22.012 "params": { 00:45:22.012 "name": "Nvme1", 00:45:22.012 "trtype": "tcp", 00:45:22.012 "traddr": "10.0.0.2", 00:45:22.012 "adrfam": "ipv4", 00:45:22.012 "trsvcid": "4420", 00:45:22.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:22.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:22.012 "hdgst": false, 00:45:22.012 "ddgst": false 00:45:22.012 }, 00:45:22.012 "method": "bdev_nvme_attach_controller" 00:45:22.012 },{ 00:45:22.012 "params": { 00:45:22.012 "name": "Nvme2", 00:45:22.012 "trtype": "tcp", 00:45:22.012 "traddr": "10.0.0.2", 00:45:22.012 "adrfam": "ipv4", 00:45:22.012 "trsvcid": "4420", 00:45:22.012 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:22.012 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:22.012 "hdgst": false, 00:45:22.012 "ddgst": false 00:45:22.012 }, 00:45:22.012 "method": "bdev_nvme_attach_controller" 00:45:22.012 }' 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:22.012 12:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:22.578 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:22.578 ... 00:45:22.578 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:22.578 ... 00:45:22.578 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:22.578 ... 
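The 24 threads started below follow from the parameters set at the top of this subtest: three DIF type 2 null bdevs (subsystems 0, 1 and 2), numjobs=8 clones each, 4k random reads at queue depth 16. The generated job file itself is not echoed by the trace, so the following is an assumption about what gen_fio_conf produces, mirroring the three banner lines above:

    cat > rand_params.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    ; the SPDK bdev plugin requires fio's thread mode
    thread=1
    rw=randread
    bs=4k
    iodepth=16
    numjobs=8

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1
    EOF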
00:45:22.578 fio-3.35 00:45:22.578 Starting 24 threads 00:45:34.789 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007199: Tue Dec 10 12:48:40 2024 00:45:34.789 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10005msec) 00:45:34.789 slat (usec): min=7, max=122, avg=43.78, stdev=23.50 00:45:34.789 clat (usec): min=26439, max=67521, avg=35932.18, stdev=1985.38 00:45:34.789 lat (usec): min=26451, max=67546, avg=35975.97, stdev=1984.89 00:45:34.789 clat percentiles (usec): 00:45:34.789 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:45:34.789 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.789 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:34.789 | 99.00th=[37487], 99.50th=[37487], 99.90th=[67634], 99.95th=[67634], 00:45:34.789 | 99.99th=[67634] 00:45:34.789 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1758.32, stdev=71.93, samples=19 00:45:34.789 iops : min= 384, max= 448, avg=439.58, stdev=17.98, samples=19 00:45:34.789 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.789 cpu : usr=98.60%, sys=0.96%, ctx=14, majf=0, minf=1634 00:45:34.789 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.789 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007200: Tue Dec 10 12:48:40 2024 00:45:34.789 read: IOPS=444, BW=1778KiB/s (1820kB/s)(17.4MiB/10005msec) 00:45:34.789 slat (usec): min=8, max=145, avg=33.07, stdev=24.70 00:45:34.789 clat (usec): min=12602, max=93912, avg=35705.49, stdev=4968.46 00:45:34.789 lat (usec): min=12619, max=93948, avg=35738.56, stdev=4969.19 00:45:34.789 clat percentiles (usec): 00:45:34.789 | 1.00th=[21627], 5.00th=[29230], 10.00th=[33162], 20.00th=[35390], 00:45:34.789 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.789 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[39060], 00:45:34.789 | 99.00th=[49546], 99.50th=[57410], 99.90th=[93848], 99.95th=[93848], 00:45:34.789 | 99.99th=[93848] 00:45:34.789 bw ( KiB/s): min= 1536, max= 1888, per=4.18%, avg=1770.95, stdev=85.01, samples=19 00:45:34.789 iops : min= 384, max= 472, avg=442.74, stdev=21.25, samples=19 00:45:34.789 lat (msec) : 20=0.27%, 50=98.79%, 100=0.94% 00:45:34.789 cpu : usr=98.47%, sys=1.10%, ctx=14, majf=0, minf=1633 00:45:34.789 IO depths : 1=3.8%, 2=7.8%, 4=17.1%, 8=61.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:45:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 complete : 0=0.0%, 4=92.2%, 8=3.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 issued rwts: total=4446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.789 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007201: Tue Dec 10 12:48:40 2024 00:45:34.789 read: IOPS=439, BW=1756KiB/s (1798kB/s)(17.2MiB/10021msec) 00:45:34.789 slat (nsec): min=7460, max=95269, avg=25667.19, stdev=7752.23 00:45:34.789 clat (usec): min=21583, max=90395, avg=36204.15, stdev=3418.55 00:45:34.789 lat (usec): min=21594, max=90420, avg=36229.81, stdev=3417.82 00:45:34.789 clat percentiles (usec): 00:45:34.789 | 1.00th=[31327], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.789 | 
30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.789 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.789 | 99.00th=[37487], 99.50th=[41681], 99.90th=[90702], 99.95th=[90702], 00:45:34.789 | 99.99th=[90702] 00:45:34.789 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1751.58, stdev=73.20, samples=19 00:45:34.789 iops : min= 384, max= 448, avg=437.89, stdev=18.30, samples=19 00:45:34.789 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.789 cpu : usr=98.28%, sys=1.29%, ctx=17, majf=0, minf=1633 00:45:34.789 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:45:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.789 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007202: Tue Dec 10 12:48:40 2024 00:45:34.789 read: IOPS=441, BW=1765KiB/s (1808kB/s)(17.2MiB/10006msec) 00:45:34.789 slat (usec): min=4, max=123, avg=38.51, stdev=23.73 00:45:34.789 clat (usec): min=20574, max=48467, avg=35975.59, stdev=1142.09 00:45:34.789 lat (usec): min=20592, max=48576, avg=36014.10, stdev=1139.93 00:45:34.789 clat percentiles (usec): 00:45:34.789 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:34.789 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.789 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.789 | 99.00th=[37487], 99.50th=[39060], 99.90th=[41681], 99.95th=[41681], 00:45:34.789 | 99.99th=[48497] 00:45:34.789 bw ( KiB/s): min= 1664, max= 1792, per=4.17%, avg=1765.05, stdev=53.61, samples=19 00:45:34.789 iops : min= 416, max= 448, avg=441.26, stdev=13.40, samples=19 00:45:34.789 lat (msec) : 50=100.00% 00:45:34.789 cpu : usr=98.62%, sys=0.94%, ctx=16, majf=0, minf=1635 00:45:34.789 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.789 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007203: Tue Dec 10 12:48:40 2024 00:45:34.789 read: IOPS=439, BW=1760KiB/s (1802kB/s)(17.2MiB/10001msec) 00:45:34.789 slat (usec): min=6, max=123, avg=36.22, stdev=22.41 00:45:34.789 clat (usec): min=20443, max=87125, avg=36106.44, stdev=2707.75 00:45:34.789 lat (usec): min=20453, max=87148, avg=36142.67, stdev=2705.36 00:45:34.789 clat percentiles (usec): 00:45:34.789 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:34.789 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.789 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.789 | 99.00th=[37487], 99.50th=[37487], 99.90th=[76022], 99.95th=[76022], 00:45:34.789 | 99.99th=[87557] 00:45:34.789 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1758.32, stdev=57.91, samples=19 00:45:34.789 iops : min= 416, max= 448, avg=439.58, stdev=14.48, samples=19 00:45:34.789 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.789 cpu : usr=98.51%, sys=1.06%, ctx=21, majf=0, minf=1636 00:45:34.789 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:45:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.789 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007204: Tue Dec 10 12:48:40 2024 00:45:34.789 read: IOPS=439, BW=1757KiB/s (1800kB/s)(17.2MiB/10015msec) 00:45:34.789 slat (nsec): min=4633, max=54153, avg=25814.24, stdev=7507.77 00:45:34.789 clat (usec): min=26110, max=88393, avg=36185.86, stdev=3278.40 00:45:34.789 lat (usec): min=26130, max=88412, avg=36211.68, stdev=3277.39 00:45:34.789 clat percentiles (usec): 00:45:34.789 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.789 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.789 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.789 | 99.00th=[36963], 99.50th=[37487], 99.90th=[88605], 99.95th=[88605], 00:45:34.789 | 99.99th=[88605] 00:45:34.789 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1751.58, stdev=74.55, samples=19 00:45:34.789 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:34.789 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.789 cpu : usr=98.28%, sys=1.30%, ctx=16, majf=0, minf=1635 00:45:34.789 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.789 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007205: Tue Dec 10 12:48:40 2024 00:45:34.789 read: IOPS=439, BW=1757KiB/s (1799kB/s)(17.2MiB/10019msec) 00:45:34.789 slat (usec): min=4, max=116, avg=43.82, stdev=23.67 00:45:34.789 clat (usec): min=21070, max=88281, avg=35958.28, stdev=3562.62 00:45:34.789 lat (usec): min=21104, max=88297, avg=36002.10, stdev=3561.83 00:45:34.789 clat percentiles (usec): 00:45:34.789 | 1.00th=[26608], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:45:34.789 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.789 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36963], 00:45:34.789 | 99.00th=[44827], 99.50th=[47449], 99.90th=[88605], 99.95th=[88605], 00:45:34.789 | 99.99th=[88605] 00:45:34.789 bw ( KiB/s): min= 1520, max= 1808, per=4.13%, avg=1751.58, stdev=77.72, samples=19 00:45:34.789 iops : min= 380, max= 452, avg=437.89, stdev=19.43, samples=19 00:45:34.789 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.789 cpu : usr=98.43%, sys=1.14%, ctx=15, majf=0, minf=1632 00:45:34.789 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:45:34.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.789 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.789 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.789 filename0: (groupid=0, jobs=1): err= 0: pid=4007206: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=441, BW=1764KiB/s (1807kB/s)(17.2MiB/10011msec) 00:45:34.790 slat (usec): min=6, max=123, avg=44.72, stdev=23.57 00:45:34.790 clat (usec): min=20652, 
max=52888, avg=35863.48, stdev=1477.23 00:45:34.790 lat (usec): min=20667, max=52913, avg=35908.20, stdev=1477.10 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:45:34.790 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.790 | 99.00th=[36963], 99.50th=[37487], 99.90th=[52691], 99.95th=[52691], 00:45:34.790 | 99.99th=[52691] 00:45:34.790 bw ( KiB/s): min= 1664, max= 1792, per=4.17%, avg=1765.05, stdev=53.61, samples=19 00:45:34.790 iops : min= 416, max= 448, avg=441.26, stdev=13.40, samples=19 00:45:34.790 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.790 cpu : usr=98.42%, sys=1.16%, ctx=17, majf=0, minf=1635 00:45:34.790 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.790 filename1: (groupid=0, jobs=1): err= 0: pid=4007207: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=449, BW=1798KiB/s (1841kB/s)(17.6MiB/10007msec) 00:45:34.790 slat (usec): min=8, max=141, avg=32.62, stdev=21.04 00:45:34.790 clat (usec): min=10741, max=95166, avg=35337.70, stdev=6037.04 00:45:34.790 lat (usec): min=10778, max=95201, avg=35370.32, stdev=6038.31 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[19792], 5.00th=[25297], 10.00th=[28967], 20.00th=[35390], 00:45:34.790 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36963], 95.00th=[41681], 00:45:34.790 | 99.00th=[56361], 99.50th=[57934], 99.90th=[94897], 99.95th=[94897], 00:45:34.790 | 99.99th=[94897] 00:45:34.790 bw ( KiB/s): min= 1536, max= 2032, per=4.23%, avg=1792.84, stdev=108.84, samples=19 00:45:34.790 iops : min= 384, max= 508, avg=448.21, stdev=27.21, samples=19 00:45:34.790 lat (msec) : 20=1.02%, 50=97.33%, 100=1.65% 00:45:34.790 cpu : usr=98.38%, sys=1.17%, ctx=12, majf=0, minf=1637 00:45:34.790 IO depths : 1=2.8%, 2=6.4%, 4=15.6%, 8=64.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 complete : 0=0.0%, 4=91.8%, 8=3.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 issued rwts: total=4498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.790 filename1: (groupid=0, jobs=1): err= 0: pid=4007208: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10008msec) 00:45:34.790 slat (usec): min=4, max=123, avg=43.04, stdev=23.57 00:45:34.790 clat (usec): min=15037, max=85798, avg=35949.72, stdev=3287.93 00:45:34.790 lat (usec): min=15047, max=85817, avg=35992.77, stdev=3287.25 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[32637], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:45:34.790 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:34.790 | 99.00th=[37487], 99.50th=[46924], 99.90th=[85459], 99.95th=[85459], 00:45:34.790 | 99.99th=[85459] 00:45:34.790 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1751.58, stdev=74.55, 
samples=19 00:45:34.790 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:34.790 lat (msec) : 20=0.05%, 50=99.59%, 100=0.36% 00:45:34.790 cpu : usr=98.33%, sys=1.25%, ctx=14, majf=0, minf=1635 00:45:34.790 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.790 filename1: (groupid=0, jobs=1): err= 0: pid=4007209: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10001msec) 00:45:34.790 slat (nsec): min=6596, max=53781, avg=23732.22, stdev=7949.40 00:45:34.790 clat (usec): min=4745, max=43041, avg=35645.66, stdev=3355.65 00:45:34.790 lat (usec): min=4761, max=43058, avg=35669.40, stdev=3356.65 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[10945], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.790 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.790 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[41681], 00:45:34.790 | 99.99th=[43254] 00:45:34.790 bw ( KiB/s): min= 1664, max= 2176, per=4.21%, avg=1785.26, stdev=108.56, samples=19 00:45:34.790 iops : min= 416, max= 544, avg=446.32, stdev=27.14, samples=19 00:45:34.790 lat (msec) : 10=0.72%, 20=0.76%, 50=98.52% 00:45:34.790 cpu : usr=98.42%, sys=1.15%, ctx=16, majf=0, minf=1634 00:45:34.790 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.790 filename1: (groupid=0, jobs=1): err= 0: pid=4007210: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=439, BW=1758KiB/s (1801kB/s)(17.2MiB/10009msec) 00:45:34.790 slat (nsec): min=6593, max=68555, avg=24420.48, stdev=8437.77 00:45:34.790 clat (usec): min=26130, max=82463, avg=36160.93, stdev=2923.79 00:45:34.790 lat (usec): min=26146, max=82488, avg=36185.35, stdev=2923.15 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.790 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.790 | 99.00th=[36963], 99.50th=[37487], 99.90th=[82314], 99.95th=[82314], 00:45:34.790 | 99.99th=[82314] 00:45:34.790 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1751.58, stdev=74.55, samples=19 00:45:34.790 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:34.790 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.790 cpu : usr=98.40%, sys=1.14%, ctx=12, majf=0, minf=1635 00:45:34.790 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.790 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:45:34.790 filename1: (groupid=0, jobs=1): err= 0: pid=4007211: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=440, BW=1761KiB/s (1803kB/s)(17.2MiB/10030msec) 00:45:34.790 slat (usec): min=5, max=122, avg=42.49, stdev=24.25 00:45:34.790 clat (usec): min=20599, max=71647, avg=36021.28, stdev=2408.47 00:45:34.790 lat (usec): min=20616, max=71667, avg=36063.77, stdev=2406.02 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:45:34.790 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.790 | 99.00th=[37487], 99.50th=[38011], 99.90th=[71828], 99.95th=[71828], 00:45:34.790 | 99.99th=[71828] 00:45:34.790 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1758.32, stdev=57.91, samples=19 00:45:34.790 iops : min= 416, max= 448, avg=439.58, stdev=14.48, samples=19 00:45:34.790 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.790 cpu : usr=98.35%, sys=1.22%, ctx=15, majf=0, minf=1631 00:45:34.790 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.790 filename1: (groupid=0, jobs=1): err= 0: pid=4007212: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=441, BW=1764KiB/s (1807kB/s)(17.2MiB/10012msec) 00:45:34.790 slat (usec): min=7, max=100, avg=18.44, stdev= 7.69 00:45:34.790 clat (usec): min=20654, max=49201, avg=36123.56, stdev=1375.82 00:45:34.790 lat (usec): min=20691, max=49225, avg=36142.01, stdev=1375.08 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.790 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.790 | 99.00th=[38011], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:45:34.790 | 99.99th=[49021] 00:45:34.790 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1760.00, stdev=56.87, samples=20 00:45:34.790 iops : min= 416, max= 448, avg=440.00, stdev=14.22, samples=20 00:45:34.790 lat (msec) : 50=100.00% 00:45:34.790 cpu : usr=98.62%, sys=0.96%, ctx=15, majf=0, minf=1634 00:45:34.790 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.790 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.790 filename1: (groupid=0, jobs=1): err= 0: pid=4007213: Tue Dec 10 12:48:40 2024 00:45:34.790 read: IOPS=441, BW=1764KiB/s (1807kB/s)(17.3MiB/10016msec) 00:45:34.790 slat (usec): min=5, max=141, avg=19.36, stdev=12.43 00:45:34.790 clat (usec): min=14667, max=96452, avg=36071.01, stdev=5394.76 00:45:34.790 lat (usec): min=14687, max=96471, avg=36090.36, stdev=5393.71 00:45:34.790 clat percentiles (usec): 00:45:34.790 | 1.00th=[17171], 5.00th=[33162], 10.00th=[35390], 20.00th=[35914], 00:45:34.790 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.790 | 
70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.790 | 99.00th=[55837], 99.50th=[60031], 99.90th=[95945], 99.95th=[95945], 00:45:34.790 | 99.99th=[95945] 00:45:34.790 bw ( KiB/s): min= 1408, max= 1888, per=4.15%, avg=1759.16, stdev=102.38, samples=19 00:45:34.790 iops : min= 352, max= 472, avg=439.79, stdev=25.59, samples=19 00:45:34.790 lat (msec) : 20=1.36%, 50=96.70%, 100=1.95% 00:45:34.790 cpu : usr=98.71%, sys=0.87%, ctx=15, majf=0, minf=1632 00:45:34.790 IO depths : 1=5.0%, 2=10.8%, 4=23.6%, 8=53.0%, 16=7.5%, 32=0.0%, >=64=0.0% 00:45:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename1: (groupid=0, jobs=1): err= 0: pid=4007214: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=441, BW=1764KiB/s (1807kB/s)(17.2MiB/10011msec) 00:45:34.791 slat (usec): min=5, max=124, avg=45.45, stdev=23.83 00:45:34.791 clat (usec): min=20763, max=64515, avg=35846.86, stdev=1795.98 00:45:34.791 lat (usec): min=20798, max=64537, avg=35892.30, stdev=1796.40 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[32637], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:45:34.791 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36963], 00:45:34.791 | 99.00th=[37487], 99.50th=[46924], 99.90th=[52691], 99.95th=[52691], 00:45:34.791 | 99.99th=[64750] 00:45:34.791 bw ( KiB/s): min= 1664, max= 1792, per=4.17%, avg=1765.05, stdev=53.61, samples=19 00:45:34.791 iops : min= 416, max= 448, avg=441.26, stdev=13.40, samples=19 00:45:34.791 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.791 cpu : usr=98.35%, sys=1.22%, ctx=15, majf=0, minf=1634 00:45:34.791 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:34.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename2: (groupid=0, jobs=1): err= 0: pid=4007215: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=439, BW=1758KiB/s (1801kB/s)(17.2MiB/10009msec) 00:45:34.791 slat (nsec): min=7234, max=57166, avg=25636.90, stdev=7605.49 00:45:34.791 clat (usec): min=26151, max=82324, avg=36162.07, stdev=2917.99 00:45:34.791 lat (usec): min=26168, max=82349, avg=36187.70, stdev=2917.23 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.791 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.791 | 99.00th=[36963], 99.50th=[37487], 99.90th=[82314], 99.95th=[82314], 00:45:34.791 | 99.99th=[82314] 00:45:34.791 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1751.58, stdev=74.55, samples=19 00:45:34.791 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:34.791 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.791 cpu : usr=98.38%, sys=1.19%, ctx=13, majf=0, minf=1633 00:45:34.791 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:34.791 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename2: (groupid=0, jobs=1): err= 0: pid=4007216: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=439, BW=1758KiB/s (1801kB/s)(17.2MiB/10009msec) 00:45:34.791 slat (usec): min=4, max=134, avg=42.62, stdev=23.43 00:45:34.791 clat (usec): min=21039, max=86655, avg=35952.18, stdev=3220.28 00:45:34.791 lat (usec): min=21066, max=86671, avg=35994.80, stdev=3219.43 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:45:34.791 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:34.791 | 99.00th=[37487], 99.50th=[37487], 99.90th=[86508], 99.95th=[86508], 00:45:34.791 | 99.99th=[86508] 00:45:34.791 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1751.58, stdev=74.55, samples=19 00:45:34.791 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:34.791 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.791 cpu : usr=98.46%, sys=1.12%, ctx=14, majf=0, minf=1634 00:45:34.791 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:34.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename2: (groupid=0, jobs=1): err= 0: pid=4007217: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=440, BW=1764KiB/s (1806kB/s)(17.2MiB/10014msec) 00:45:34.791 slat (usec): min=3, max=119, avg=20.70, stdev=11.86 00:45:34.791 clat (usec): min=20559, max=48686, avg=36116.93, stdev=1436.82 00:45:34.791 lat (usec): min=20594, max=48703, avg=36137.63, stdev=1435.46 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.791 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[36963], 00:45:34.791 | 99.00th=[38011], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 00:45:34.791 | 99.99th=[48497] 00:45:34.791 bw ( KiB/s): min= 1664, max= 1808, per=4.16%, avg=1760.00, stdev=57.10, samples=20 00:45:34.791 iops : min= 416, max= 452, avg=440.00, stdev=14.28, samples=20 00:45:34.791 lat (msec) : 50=100.00% 00:45:34.791 cpu : usr=98.34%, sys=1.24%, ctx=15, majf=0, minf=1636 00:45:34.791 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:34.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename2: (groupid=0, jobs=1): err= 0: pid=4007218: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10005msec) 00:45:34.791 slat (usec): min=7, max=131, avg=31.40, stdev=10.27 00:45:34.791 clat (usec): min=24212, max=67693, avg=36094.25, stdev=2084.29 00:45:34.791 lat (usec): min=24227, 
max=67718, avg=36125.64, stdev=2083.09 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:34.791 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.791 | 99.00th=[37487], 99.50th=[45351], 99.90th=[67634], 99.95th=[67634], 00:45:34.791 | 99.99th=[67634] 00:45:34.791 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1758.32, stdev=71.93, samples=19 00:45:34.791 iops : min= 384, max= 448, avg=439.58, stdev=17.98, samples=19 00:45:34.791 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.791 cpu : usr=98.63%, sys=0.97%, ctx=15, majf=0, minf=1632 00:45:34.791 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:34.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename2: (groupid=0, jobs=1): err= 0: pid=4007219: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=440, BW=1762KiB/s (1804kB/s)(17.2MiB/10004msec) 00:45:34.791 slat (usec): min=7, max=133, avg=21.38, stdev=16.35 00:45:34.791 clat (usec): min=17940, max=76724, avg=36127.66, stdev=3045.22 00:45:34.791 lat (usec): min=17953, max=76750, avg=36149.04, stdev=3044.33 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[26608], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:34.791 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.791 | 99.00th=[38011], 99.50th=[57934], 99.90th=[77071], 99.95th=[77071], 00:45:34.791 | 99.99th=[77071] 00:45:34.791 bw ( KiB/s): min= 1536, max= 1856, per=4.16%, avg=1760.84, stdev=74.76, samples=19 00:45:34.791 iops : min= 384, max= 464, avg=440.21, stdev=18.69, samples=19 00:45:34.791 lat (msec) : 20=0.05%, 50=99.32%, 100=0.64% 00:45:34.791 cpu : usr=98.43%, sys=1.13%, ctx=12, majf=0, minf=1634 00:45:34.791 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:34.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename2: (groupid=0, jobs=1): err= 0: pid=4007220: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=459, BW=1837KiB/s (1881kB/s)(18.0MiB/10027msec) 00:45:34.791 slat (nsec): min=5193, max=53480, avg=16288.12, stdev=6880.55 00:45:34.791 clat (usec): min=811, max=55220, avg=34705.72, stdev=5347.26 00:45:34.791 lat (usec): min=820, max=55230, avg=34722.01, stdev=5348.52 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[ 7046], 5.00th=[22676], 10.00th=[35390], 20.00th=[35914], 00:45:34.791 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.791 | 99.00th=[36963], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:45:34.791 | 99.99th=[55313] 00:45:34.791 bw ( KiB/s): min= 1664, max= 3040, per=4.33%, avg=1835.20, stdev=290.38, samples=20 00:45:34.791 iops : min= 416, max= 760, avg=458.80, stdev=72.60, 
samples=20 00:45:34.791 lat (usec) : 1000=0.04% 00:45:34.791 lat (msec) : 2=0.35%, 10=1.35%, 20=1.61%, 50=96.61%, 100=0.04% 00:45:34.791 cpu : usr=98.26%, sys=1.30%, ctx=36, majf=0, minf=1635 00:45:34.791 IO depths : 1=5.7%, 2=11.4%, 4=23.6%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:45:34.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.791 issued rwts: total=4604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.791 filename2: (groupid=0, jobs=1): err= 0: pid=4007221: Tue Dec 10 12:48:40 2024 00:45:34.791 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10006msec) 00:45:34.791 slat (nsec): min=8286, max=74645, avg=32464.63, stdev=13025.49 00:45:34.791 clat (usec): min=23990, max=80233, avg=36088.38, stdev=2831.49 00:45:34.791 lat (usec): min=24035, max=80256, avg=36120.84, stdev=2830.29 00:45:34.791 clat percentiles (usec): 00:45:34.791 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:34.791 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.791 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.791 | 99.00th=[36963], 99.50th=[37487], 99.90th=[80217], 99.95th=[80217], 00:45:34.791 | 99.99th=[80217] 00:45:34.791 bw ( KiB/s): min= 1539, max= 1792, per=4.13%, avg=1751.74, stdev=74.07, samples=19 00:45:34.791 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:34.791 lat (msec) : 50=99.64%, 100=0.36% 00:45:34.791 cpu : usr=98.33%, sys=1.23%, ctx=15, majf=0, minf=1635 00:45:34.792 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:34.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.792 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.792 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:34.792 filename2: (groupid=0, jobs=1): err= 0: pid=4007222: Tue Dec 10 12:48:40 2024 00:45:34.792 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10002msec) 00:45:34.792 slat (nsec): min=5337, max=71866, avg=21573.66, stdev=10021.02 00:45:34.792 clat (usec): min=4580, max=41593, avg=35668.92, stdev=3325.57 00:45:34.792 lat (usec): min=4590, max=41629, avg=35690.49, stdev=3325.93 00:45:34.792 clat percentiles (usec): 00:45:34.792 | 1.00th=[10945], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:34.792 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:34.792 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:34.792 | 99.00th=[37487], 99.50th=[37487], 99.90th=[41157], 99.95th=[41681], 00:45:34.792 | 99.99th=[41681] 00:45:34.792 bw ( KiB/s): min= 1664, max= 2180, per=4.21%, avg=1785.47, stdev=109.36, samples=19 00:45:34.792 iops : min= 416, max= 545, avg=446.37, stdev=27.34, samples=19 00:45:34.792 lat (msec) : 10=0.72%, 20=0.72%, 50=98.57% 00:45:34.792 cpu : usr=98.23%, sys=1.32%, ctx=15, majf=0, minf=1636 00:45:34.792 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:34.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.792 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:34.792 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:34.792 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:45:34.792 00:45:34.792 Run status group 0 (all jobs): 00:45:34.792 READ: bw=41.4MiB/s (43.4MB/s), 1756KiB/s-1837KiB/s (1798kB/s-1881kB/s), io=415MiB (435MB), run=10001-10030msec 00:45:35.052 ----------------------------------------------------- 00:45:35.052 Suppressions used: 00:45:35.052 count bytes template 00:45:35.052 45 402 /usr/src/fio/parse.c 00:45:35.052 1 8 libtcmalloc_minimal.so 00:45:35.052 1 904 libcrypto.so 00:45:35.052 ----------------------------------------------------- 00:45:35.052 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 bdev_null0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 [2024-12-10 12:48:41.799046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 bdev_null1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:35.052 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:35.052 { 00:45:35.052 "params": { 00:45:35.052 "name": "Nvme$subsystem", 00:45:35.052 "trtype": "$TEST_TRANSPORT", 00:45:35.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:35.052 "adrfam": "ipv4", 00:45:35.052 "trsvcid": "$NVMF_PORT", 00:45:35.052 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:45:35.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:35.052 "hdgst": ${hdgst:-false}, 00:45:35.053 "ddgst": ${ddgst:-false} 00:45:35.053 }, 00:45:35.053 "method": "bdev_nvme_attach_controller" 00:45:35.053 } 00:45:35.053 EOF 00:45:35.053 )") 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:35.053 { 00:45:35.053 "params": { 00:45:35.053 "name": "Nvme$subsystem", 00:45:35.053 "trtype": "$TEST_TRANSPORT", 00:45:35.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:35.053 "adrfam": "ipv4", 00:45:35.053 "trsvcid": "$NVMF_PORT", 00:45:35.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:35.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:35.053 "hdgst": ${hdgst:-false}, 00:45:35.053 "ddgst": ${ddgst:-false} 00:45:35.053 }, 00:45:35.053 "method": "bdev_nvme_attach_controller" 00:45:35.053 } 00:45:35.053 EOF 00:45:35.053 )") 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:35.053 12:48:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:35.053 "params": { 00:45:35.053 "name": "Nvme0", 00:45:35.053 "trtype": "tcp", 00:45:35.053 "traddr": "10.0.0.2", 00:45:35.053 "adrfam": "ipv4", 00:45:35.053 "trsvcid": "4420", 00:45:35.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:35.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:35.053 "hdgst": false, 00:45:35.053 "ddgst": false 00:45:35.053 }, 00:45:35.053 "method": "bdev_nvme_attach_controller" 00:45:35.053 },{ 00:45:35.053 "params": { 00:45:35.053 "name": "Nvme1", 00:45:35.053 "trtype": "tcp", 00:45:35.053 "traddr": "10.0.0.2", 00:45:35.053 "adrfam": "ipv4", 00:45:35.053 "trsvcid": "4420", 00:45:35.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:35.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:35.053 "hdgst": false, 00:45:35.053 "ddgst": false 00:45:35.053 }, 00:45:35.053 "method": "bdev_nvme_attach_controller" 00:45:35.053 }' 00:45:35.348 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:35.348 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:35.348 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:35.348 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:35.348 12:48:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:35.611 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:35.611 ... 00:45:35.611 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:35.611 ... 
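
Once that JSON is assembled, fio is launched through the SPDK bdev plugin: the fio_bdev wrapper preloads build/fio/spdk_bdev (plus libasan.so.8 here, since this is an ASAN build) and passes --spdk_json_conf, as the LD_PRELOAD entry above shows. A hedged sketch of an equivalent direct invocation with this test's parameters — bdev.json standing in for /dev/fd/62, and Nvme0n1/Nvme1n1 assumed as the namespace bdev names exposed by the attached controllers:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
fio --ioengine=spdk_bdev --spdk_json_conf bdev.json \
    --thread=1 \
    --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 \
    --runtime=5 --time_based=1 \
    --name=filename0 --filename=Nvme0n1 \
    --name=filename1 --filename=Nvme1n1

The three-way bs list sets read,write,trim sizes separately, which is why the job headers above report bs=(R) 8192B, (W) 16.0KiB, (T) 128KiB; thread=1 is required by the SPDK plugin, and numjobs=2 across the two named sections accounts for the "Starting 4 threads" line.
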
00:45:35.611 fio-3.35 00:45:35.611 Starting 4 threads 00:45:42.194 00:45:42.194 filename0: (groupid=0, jobs=1): err= 0: pid=4009332: Tue Dec 10 12:48:48 2024 00:45:42.194 read: IOPS=2428, BW=19.0MiB/s (19.9MB/s)(94.9MiB/5002msec) 00:45:42.194 slat (nsec): min=7086, max=36541, avg=10551.98, stdev=3538.95 00:45:42.194 clat (usec): min=1216, max=45358, avg=3260.90, stdev=1177.72 00:45:42.194 lat (usec): min=1231, max=45395, avg=3271.45, stdev=1177.69 00:45:42.194 clat percentiles (usec): 00:45:42.194 | 1.00th=[ 2073], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2868], 00:45:42.194 | 30.00th=[ 2933], 40.00th=[ 3130], 50.00th=[ 3294], 60.00th=[ 3458], 00:45:42.194 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3654], 95.00th=[ 3916], 00:45:42.194 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 5669], 99.95th=[45351], 00:45:42.194 | 99.99th=[45351] 00:45:42.194 bw ( KiB/s): min=16432, max=21424, per=26.80%, avg=19416.89, stdev=1750.45, samples=9 00:45:42.194 iops : min= 2054, max= 2678, avg=2427.11, stdev=218.81, samples=9 00:45:42.194 lat (msec) : 2=0.73%, 4=95.03%, 10=4.17%, 50=0.07% 00:45:42.194 cpu : usr=96.00%, sys=3.58%, ctx=6, majf=0, minf=1633 00:45:42.194 IO depths : 1=0.3%, 2=7.2%, 4=64.4%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:42.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.194 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.194 issued rwts: total=12145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.194 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:42.194 filename0: (groupid=0, jobs=1): err= 0: pid=4009333: Tue Dec 10 12:48:48 2024 00:45:42.194 read: IOPS=2204, BW=17.2MiB/s (18.1MB/s)(86.1MiB/5002msec) 00:45:42.194 slat (nsec): min=6975, max=34323, avg=11104.58, stdev=3729.67 00:45:42.194 clat (usec): min=687, max=6190, avg=3598.20, stdev=469.90 00:45:42.194 lat (usec): min=701, max=6197, avg=3609.31, stdev=469.49 00:45:42.194 clat percentiles (usec): 00:45:42.194 | 1.00th=[ 2474], 5.00th=[ 2900], 10.00th=[ 3163], 20.00th=[ 3425], 00:45:42.195 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3523], 00:45:42.195 | 70.00th=[ 3621], 80.00th=[ 3851], 90.00th=[ 4228], 95.00th=[ 4424], 00:45:42.195 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 5932], 99.95th=[ 6063], 00:45:42.195 | 99.99th=[ 6194] 00:45:42.195 bw ( KiB/s): min=16896, max=18320, per=24.39%, avg=17674.67, stdev=567.89, samples=9 00:45:42.195 iops : min= 2112, max= 2290, avg=2209.33, stdev=70.99, samples=9 00:45:42.195 lat (usec) : 750=0.02%, 1000=0.01% 00:45:42.195 lat (msec) : 2=0.44%, 4=83.89%, 10=15.64% 00:45:42.195 cpu : usr=96.18%, sys=3.44%, ctx=8, majf=0, minf=1632 00:45:42.195 IO depths : 1=0.1%, 2=2.5%, 4=67.2%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:42.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.195 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.195 issued rwts: total=11026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:42.195 filename1: (groupid=0, jobs=1): err= 0: pid=4009334: Tue Dec 10 12:48:48 2024 00:45:42.195 read: IOPS=2171, BW=17.0MiB/s (17.8MB/s)(84.9MiB/5001msec) 00:45:42.195 slat (nsec): min=7081, max=37822, avg=10723.12, stdev=3878.45 00:45:42.195 clat (usec): min=544, max=6490, avg=3651.05, stdev=486.12 00:45:42.195 lat (usec): min=551, max=6505, avg=3661.77, stdev=485.78 00:45:42.195 clat percentiles (usec): 00:45:42.195 | 1.00th=[ 2573], 5.00th=[ 3097], 
10.00th=[ 3294], 20.00th=[ 3458], 00:45:42.195 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:45:42.195 | 70.00th=[ 3687], 80.00th=[ 3884], 90.00th=[ 4228], 95.00th=[ 4424], 00:45:42.195 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6128], 99.95th=[ 6194], 00:45:42.195 | 99.99th=[ 6390] 00:45:42.195 bw ( KiB/s): min=16512, max=18256, per=24.02%, avg=17406.22, stdev=682.09, samples=9 00:45:42.195 iops : min= 2064, max= 2282, avg=2175.78, stdev=85.26, samples=9 00:45:42.195 lat (usec) : 750=0.04%, 1000=0.09% 00:45:42.195 lat (msec) : 2=0.20%, 4=82.04%, 10=17.63% 00:45:42.195 cpu : usr=95.94%, sys=3.64%, ctx=8, majf=0, minf=1633 00:45:42.195 IO depths : 1=0.1%, 2=2.5%, 4=69.6%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:42.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.195 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.195 issued rwts: total=10862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:42.195 filename1: (groupid=0, jobs=1): err= 0: pid=4009335: Tue Dec 10 12:48:48 2024 00:45:42.195 read: IOPS=2253, BW=17.6MiB/s (18.5MB/s)(88.1MiB/5002msec) 00:45:42.195 slat (nsec): min=7106, max=35245, avg=11041.61, stdev=3733.44 00:45:42.195 clat (usec): min=1220, max=6617, avg=3517.80, stdev=460.80 00:45:42.195 lat (usec): min=1232, max=6640, avg=3528.85, stdev=460.52 00:45:42.195 clat percentiles (usec): 00:45:42.195 | 1.00th=[ 2376], 5.00th=[ 2769], 10.00th=[ 2966], 20.00th=[ 3228], 00:45:42.195 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:45:42.195 | 70.00th=[ 3556], 80.00th=[ 3752], 90.00th=[ 4113], 95.00th=[ 4359], 00:45:42.195 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5997], 00:45:42.195 | 99.99th=[ 6521] 00:45:42.195 bw ( KiB/s): min=17296, max=18672, per=24.79%, avg=17962.67, stdev=549.97, samples=9 00:45:42.195 iops : min= 2162, max= 2334, avg=2245.33, stdev=68.75, samples=9 00:45:42.195 lat (msec) : 2=0.20%, 4=86.59%, 10=13.21% 00:45:42.195 cpu : usr=95.72%, sys=3.86%, ctx=7, majf=0, minf=1633 00:45:42.195 IO depths : 1=0.2%, 2=2.5%, 4=68.4%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:42.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.195 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.195 issued rwts: total=11272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:42.195 00:45:42.195 Run status group 0 (all jobs): 00:45:42.195 READ: bw=70.8MiB/s (74.2MB/s), 17.0MiB/s-19.0MiB/s (17.8MB/s-19.9MB/s), io=354MiB (371MB), run=5001-5002msec 00:45:42.763 ----------------------------------------------------- 00:45:42.763 Suppressions used: 00:45:42.763 count bytes template 00:45:42.763 6 52 /usr/src/fio/parse.c 00:45:42.763 1 8 libtcmalloc_minimal.so 00:45:42.763 1 904 libcrypto.so 00:45:42.763 ----------------------------------------------------- 00:45:42.763 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:42.763 12:48:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 00:45:42.763 real 0m28.236s 00:45:42.763 user 4m56.728s 00:45:42.763 sys 0m5.923s 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 ************************************ 00:45:42.763 END TEST fio_dif_rand_params 00:45:42.763 ************************************ 00:45:42.763 12:48:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:42.763 12:48:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:42.763 12:48:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 ************************************ 00:45:42.763 START TEST fio_dif_digest 00:45:42.763 ************************************ 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:42.763 12:48:49 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 bdev_null0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:42.763 [2024-12-10 12:48:49.559718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:42.763 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:42.764 { 00:45:42.764 "params": { 00:45:42.764 "name": "Nvme$subsystem", 00:45:42.764 "trtype": "$TEST_TRANSPORT", 00:45:42.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:42.764 "adrfam": "ipv4", 00:45:42.764 "trsvcid": "$NVMF_PORT", 
00:45:42.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:42.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:42.764 "hdgst": ${hdgst:-false}, 00:45:42.764 "ddgst": ${ddgst:-false} 00:45:42.764 }, 00:45:42.764 "method": "bdev_nvme_attach_controller" 00:45:42.764 } 00:45:42.764 EOF 00:45:42.764 )") 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:42.764 12:48:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:42.764 "params": { 00:45:42.764 "name": "Nvme0", 00:45:42.764 "trtype": "tcp", 00:45:42.764 "traddr": "10.0.0.2", 00:45:42.764 "adrfam": "ipv4", 00:45:42.764 "trsvcid": "4420", 00:45:42.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:42.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:42.764 "hdgst": true, 00:45:42.764 "ddgst": true 00:45:42.764 }, 00:45:42.764 "method": "bdev_nvme_attach_controller" 00:45:42.764 }' 00:45:43.053 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:43.053 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:43.053 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:45:43.053 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:43.053 12:48:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:43.313 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:43.313 ... 
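
Note that the attach config just printed sets "hdgst": true and "ddgst": true — NVMe/TCP header and data digests are enabled for this run, where the earlier rand_params runs left both false. The per-job results that follow can be sanity-checked as throughput = completed I/Os / runtime × block size; e.g., for the first job below:

# 2468 reads over ~10.045 s at 128 KiB each  =>  ~30.7 MiB/s, matching the reported BW=30.7MiB/s
echo 'scale=1; (2468 / 10.045) * 128 / 1024' | bc
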
00:45:43.313 fio-3.35 00:45:43.313 Starting 3 threads 00:45:55.521 00:45:55.521 filename0: (groupid=0, jobs=1): err= 0: pid=4010581: Tue Dec 10 12:49:00 2024 00:45:55.521 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(309MiB/10045msec) 00:45:55.521 slat (usec): min=7, max=199, avg=14.24, stdev= 4.09 00:45:55.521 clat (usec): min=7131, max=50900, avg=12175.34, stdev=1413.43 00:45:55.521 lat (usec): min=7144, max=50913, avg=12189.58, stdev=1413.46 00:45:55.521 clat percentiles (usec): 00:45:55.521 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:45:55.521 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:45:55.521 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13566], 00:45:55.521 | 99.00th=[14615], 99.50th=[15139], 99.90th=[17433], 99.95th=[48497], 00:45:55.521 | 99.99th=[51119] 00:45:55.521 bw ( KiB/s): min=29440, max=32256, per=35.67%, avg=31564.80, stdev=752.57, samples=20 00:45:55.521 iops : min= 230, max= 252, avg=246.60, stdev= 5.88, samples=20 00:45:55.521 lat (msec) : 10=0.73%, 20=99.19%, 50=0.04%, 100=0.04% 00:45:55.521 cpu : usr=94.21%, sys=5.43%, ctx=19, majf=0, minf=1636 00:45:55.521 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:55.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:55.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:55.521 issued rwts: total=2468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:55.521 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:55.521 filename0: (groupid=0, jobs=1): err= 0: pid=4010582: Tue Dec 10 12:49:00 2024 00:45:55.521 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10044msec) 00:45:55.521 slat (nsec): min=7800, max=70673, avg=14095.84, stdev=2047.43 00:45:55.521 clat (usec): min=10290, max=53460, avg=13169.09, stdev=2000.82 00:45:55.521 lat (usec): min=10304, max=53474, avg=13183.19, stdev=2000.92 00:45:55.521 clat percentiles (usec): 00:45:55.521 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:45:55.521 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:45:55.521 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:45:55.521 | 99.00th=[15401], 99.50th=[15664], 99.90th=[52167], 99.95th=[53216], 00:45:55.521 | 99.99th=[53216] 00:45:55.521 bw ( KiB/s): min=26880, max=30208, per=32.98%, avg=29184.00, stdev=621.54, samples=20 00:45:55.521 iops : min= 210, max= 236, avg=228.00, stdev= 4.86, samples=20 00:45:55.521 lat (msec) : 20=99.78%, 50=0.04%, 100=0.18% 00:45:55.521 cpu : usr=95.15%, sys=4.49%, ctx=20, majf=0, minf=1632 00:45:55.521 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:55.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:55.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:55.521 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:55.521 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:55.521 filename0: (groupid=0, jobs=1): err= 0: pid=4010583: Tue Dec 10 12:49:00 2024 00:45:55.521 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(274MiB/10045msec) 00:45:55.521 slat (nsec): min=7569, max=35813, avg=14343.33, stdev=1641.94 00:45:55.521 clat (usec): min=8813, max=51096, avg=13698.45, stdev=1411.19 00:45:55.521 lat (usec): min=8828, max=51110, avg=13712.79, stdev=1411.33 00:45:55.521 clat percentiles (usec): 00:45:55.521 | 1.00th=[11469], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 
00:45:55.521 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:45:55.521 | 70.00th=[14091], 80.00th=[14484], 90.00th=[14877], 95.00th=[15270], 00:45:55.521 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16712], 99.95th=[45876], 00:45:55.521 | 99.99th=[51119] 00:45:55.521 bw ( KiB/s): min=27392, max=29696, per=31.71%, avg=28057.60, stdev=618.21, samples=20 00:45:55.521 iops : min= 214, max= 232, avg=219.20, stdev= 4.83, samples=20 00:45:55.521 lat (msec) : 10=0.32%, 20=99.59%, 50=0.05%, 100=0.05% 00:45:55.521 cpu : usr=94.95%, sys=4.69%, ctx=16, majf=0, minf=1636 00:45:55.521 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:55.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:55.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:55.521 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:55.521 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:55.521 00:45:55.521 Run status group 0 (all jobs): 00:45:55.521 READ: bw=86.4MiB/s (90.6MB/s), 27.3MiB/s-30.7MiB/s (28.6MB/s-32.2MB/s), io=868MiB (910MB), run=10044-10045msec 00:45:55.521 ----------------------------------------------------- 00:45:55.521 Suppressions used: 00:45:55.521 count bytes template 00:45:55.521 5 44 /usr/src/fio/parse.c 00:45:55.521 1 8 libtcmalloc_minimal.so 00:45:55.521 1 904 libcrypto.so 00:45:55.521 ----------------------------------------------------- 00:45:55.521 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:55.521 00:45:55.521 real 0m12.579s 00:45:55.521 user 0m36.677s 00:45:55.521 sys 0m2.048s 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:55.521 12:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:55.521 ************************************ 00:45:55.521 END TEST fio_dif_digest 00:45:55.521 ************************************ 00:45:55.521 12:49:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:55.521 12:49:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:55.521 12:49:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:55.521 12:49:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:55.521 12:49:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:55.521 12:49:02 
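Before nvmftestfini unloads the transport below, note the teardown just traced: destroy_subsystems removes each numbered subsystem with two RPC calls, subsystem first, backing bdev second, so no initiator can still reach the namespace when its bdev disappears. Issued by hand against the same target they would look roughly like this (rpc.py path from this workspace; the cnode0 and bdev_null0 names are taken from the trace):

# Sketch of the two RPCs behind destroy_subsystem 0.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Drop the NVMe-oF subsystem first...
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0

# ...then delete the null bdev that backed its namespace.
$RPC bdev_null_delete bdev_null0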
nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:55.521 12:49:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:55.521 12:49:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:55.521 rmmod nvme_tcp 00:45:55.521 rmmod nvme_fabrics 00:45:55.521 rmmod nvme_keyring 00:45:55.521 12:49:02 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:55.521 12:49:02 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:55.522 12:49:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:55.522 12:49:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4000811 ']' 00:45:55.522 12:49:02 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4000811 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 4000811 ']' 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 4000811 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4000811 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4000811' 00:45:55.522 killing process with pid 4000811 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 4000811 00:45:55.522 12:49:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 4000811 00:45:56.900 12:49:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:56.900 12:49:03 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:59.433 Waiting for block devices as requested 00:45:59.433 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:45:59.433 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:59.433 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:59.433 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:59.433 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:59.433 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:59.433 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:59.433 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:59.692 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:59.692 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:59.692 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:59.951 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:59.951 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:59.951 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:59.951 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:00.209 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:00.209 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:00.209 12:49:06 nvmf_dif -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:46:00.209 12:49:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:00.209 12:49:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:02.747 12:49:09 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:02.747 00:46:02.747 real 1m22.115s 00:46:02.747 user 7m27.269s 00:46:02.747 sys 0m20.749s 00:46:02.747 12:49:09 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:02.747 12:49:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:02.747 ************************************ 00:46:02.747 END TEST nvmf_dif 00:46:02.747 ************************************ 00:46:02.747 12:49:09 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:02.747 12:49:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:02.747 12:49:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:02.747 12:49:09 -- common/autotest_common.sh@10 -- # set +x 00:46:02.747 ************************************ 00:46:02.747 START TEST nvmf_abort_qd_sizes 00:46:02.747 ************************************ 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:02.747 * Looking for test storage... 00:46:02.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:02.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.747 --rc genhtml_branch_coverage=1 00:46:02.747 --rc genhtml_function_coverage=1 00:46:02.747 --rc genhtml_legend=1 00:46:02.747 --rc geninfo_all_blocks=1 00:46:02.747 --rc geninfo_unexecuted_blocks=1 00:46:02.747 00:46:02.747 ' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:02.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.747 --rc genhtml_branch_coverage=1 00:46:02.747 --rc genhtml_function_coverage=1 00:46:02.747 --rc genhtml_legend=1 00:46:02.747 --rc geninfo_all_blocks=1 00:46:02.747 --rc geninfo_unexecuted_blocks=1 00:46:02.747 00:46:02.747 ' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:02.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.747 --rc genhtml_branch_coverage=1 00:46:02.747 --rc genhtml_function_coverage=1 00:46:02.747 --rc genhtml_legend=1 00:46:02.747 --rc geninfo_all_blocks=1 00:46:02.747 --rc geninfo_unexecuted_blocks=1 00:46:02.747 00:46:02.747 ' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:02.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.747 --rc genhtml_branch_coverage=1 00:46:02.747 --rc genhtml_function_coverage=1 00:46:02.747 --rc genhtml_legend=1 00:46:02.747 --rc geninfo_all_blocks=1 00:46:02.747 --rc geninfo_unexecuted_blocks=1 00:46:02.747 00:46:02.747 ' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.747 12:49:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:02.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:46:02.748 12:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:08.021 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:46:08.022 Found 0000:af:00.0 (0x8086 - 0x159b) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:46:08.022 Found 0000:af:00.1 (0x8086 - 0x159b) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:46:08.022 Found net devices under 0000:af:00.0: cvl_0_0 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:46:08.022 Found net devices under 0000:af:00.1: cvl_0_1 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:08.022 12:49:14 
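Both ports of the 0000:af NIC have now been matched against the e810 allowlist and resolved to cvl_0_0 and cvl_0_1; the namespace plumbing continues below. The resolution step itself reduces to a sysfs glob, roughly as follows (a sketch of the traced logic, not the harness's pci_bus_cache implementation):

# Map each allowlisted PCI function to its kernel netdev via sysfs.
for pci in 0000:af:00.0 0000:af:00.1; do
    for net in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
    done
done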
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:08.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:08.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:46:08.022 00:46:08.022 --- 10.0.0.2 ping statistics --- 00:46:08.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:08.022 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:08.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:08.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:46:08.022 00:46:08.022 --- 10.0.0.1 ping statistics --- 00:46:08.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:08.022 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:08.022 12:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:10.584 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:10.584 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:11.152 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=4018569 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 4018569 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 4018569 ']' 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
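While waitforlisten polls for the RPC socket below, it is worth unpacking what nvmf_tcp_init set up above. The two NIC ports are loopback-cabled, so the harness splits them across network namespaces: the target-side port cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, its peer cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP on port 4420, both directions are ping-verified, and nvmf_tgt then runs entirely inside the namespace. Condensed from the traced commands:

# Condensed nvmf_tcp_init: split loopback-cabled ports across namespaces.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the default ns

ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

# The target app is then launched inside the namespace, as traced above:
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf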
00:46:11.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:11.411 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:11.411 [2024-12-10 12:49:18.172404] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:46:11.411 [2024-12-10 12:49:18.172497] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:11.670 [2024-12-10 12:49:18.289899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:11.670 [2024-12-10 12:49:18.400512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:11.670 [2024-12-10 12:49:18.400559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:11.670 [2024-12-10 12:49:18.400570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:11.670 [2024-12-10 12:49:18.400580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:11.670 [2024-12-10 12:49:18.400587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:11.670 [2024-12-10 12:49:18.402817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:11.670 [2024-12-10 12:49:18.402893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:11.670 [2024-12-10 12:49:18.402955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:11.670 [2024-12-10 12:49:18.402966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:12.238 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:12.238 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:46:12.238 12:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:12.238 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:12.238 12:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:12.238 
12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:12.238 12:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:12.239 12:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:12.239 12:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:12.498 ************************************ 00:46:12.498 START TEST spdk_target_abort 00:46:12.498 ************************************ 00:46:12.498 12:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:46:12.498 12:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:12.498 12:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:46:12.498 12:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.498 12:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:15.783 spdk_targetn1 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:15.783 [2024-12-10 12:49:21.949720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:15.783 12:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:15.783 [2024-12-10 12:49:21.998259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:15.783 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:15.784 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:15.784 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:15.784 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:15.784 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:15.784 12:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:19.071 Initializing NVMe Controllers 00:46:19.071 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:19.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:19.071 Initialization complete. Launching workers. 00:46:19.071 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14166, failed: 0 00:46:19.071 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 12897 00:46:19.071 success 777, unsuccessful 492, failed 0 00:46:19.072 12:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:19.072 12:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:22.360 Initializing NVMe Controllers 00:46:22.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:22.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:22.360 Initialization complete. Launching workers. 00:46:22.360 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8494, failed: 0 00:46:22.360 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7247 00:46:22.360 success 334, unsuccessful 913, failed 0 00:46:22.360 12:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:22.360 12:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:25.650 Initializing NVMe Controllers 00:46:25.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:25.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:25.650 Initialization complete. Launching workers. 
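Each "Initializing NVMe Controllers" block above (and the qd=64 totals just below) is one pass of the rabort helper: it assembles the transport-ID string field by field, exactly as traced, and reruns the abort example at queue depths 4, 24 and 64. "abort submitted" counts abort commands issued against in-flight I/O; success and unsuccessful split them by the completion status the controller returned. The loop reduces to roughly:

# Sketch of the traced rabort loop; flags and target string exactly as assembled above.
qds=(4 24 64)
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in "${qds[@]}"; do
    # -q: queue depth, -w rw -M 50: 50/50 mixed I/O, -o 4096: 4 KiB I/Os
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done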
00:46:25.650 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33613, failed: 0 00:46:25.650 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2820, failed to submit 30793 00:46:25.650 success 556, unsuccessful 2264, failed 0 00:46:25.650 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:25.650 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.650 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:25.650 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.650 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:25.650 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.650 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4018569 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 4018569 ']' 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 4018569 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4018569 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4018569' 00:46:26.587 killing process with pid 4018569 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 4018569 00:46:26.587 12:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 4018569 00:46:27.525 00:46:27.525 real 0m15.181s 00:46:27.525 user 0m59.523s 00:46:27.525 sys 0m2.645s 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:27.525 ************************************ 00:46:27.525 END TEST spdk_target_abort 00:46:27.525 ************************************ 00:46:27.525 12:49:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:27.525 12:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:27.525 12:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:27.525 12:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:27.525 ************************************ 00:46:27.525 START TEST kernel_target_abort 00:46:27.525 
************************************ 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:27.525 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:27.784 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:27.784 12:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:30.320 Waiting for block devices as requested 00:46:30.321 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:30.321 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:30.321 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:30.321 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:30.579 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:30.579 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:30.579 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:30.579 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:30.838 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:30.838 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:30.838 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:31.097 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:31.097 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:31.097 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:31.097 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:31.357 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:31.357 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:31.925 No valid GPT data, bailing 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:31.925 12:49:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:46:31.925 00:46:31.925 Discovery Log Number of Records 2, Generation counter 2 00:46:31.925 =====Discovery Log Entry 0====== 00:46:31.925 trtype: tcp 00:46:31.925 adrfam: ipv4 00:46:31.925 subtype: current discovery subsystem 00:46:31.925 treq: not specified, sq flow control disable supported 00:46:31.925 portid: 1 00:46:31.925 trsvcid: 4420 00:46:31.925 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:31.925 traddr: 10.0.0.1 00:46:31.925 eflags: none 00:46:31.925 sectype: none 00:46:31.925 =====Discovery Log Entry 1====== 00:46:31.925 trtype: tcp 00:46:31.925 adrfam: ipv4 00:46:31.925 subtype: nvme subsystem 00:46:31.925 treq: not specified, sq flow control disable supported 00:46:31.925 portid: 1 00:46:31.925 trsvcid: 4420 00:46:31.925 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:31.925 traddr: 10.0.0.1 00:46:31.925 eflags: none 00:46:31.925 sectype: none 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:31.925 12:49:38 
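The rabort target string is rebuilt below just as in the userspace test; before that, the configfs work just traced deserves a summary. configure_kernel_target drives the in-kernel nvmet target entirely through configfs, and the discovery listing above confirms the result: the discovery subsystem plus the new nqn.2016-06.io.spdk:testnqn, both listening on 10.0.0.1:4420. A sketch of the sequence, with the echoes' redirection targets (which xtrace does not show) inferred from the standard nvmet configfs layout:

# Sketch: build the kernel nvmet target over configfs, values as traced.
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # model string (target file inferred)
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Linking the subsystem into the port is what makes it live.
ln -s "$sub" "$port/subsystems/"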
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:31.925 12:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:35.213 Initializing NVMe Controllers 00:46:35.213 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:35.213 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:35.213 Initialization complete. Launching workers. 00:46:35.213 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80798, failed: 0 00:46:35.213 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80798, failed to submit 0 00:46:35.213 success 0, unsuccessful 80798, failed 0 00:46:35.213 12:49:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:35.213 12:49:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:38.498 Initializing NVMe Controllers 00:46:38.498 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:38.498 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:38.498 Initialization complete. Launching workers. 
00:46:38.498 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 128346, failed: 0 00:46:38.498 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32182, failed to submit 96164 00:46:38.498 success 0, unsuccessful 32182, failed 0 00:46:38.498 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:38.498 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:41.782 Initializing NVMe Controllers 00:46:41.782 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:41.782 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:41.782 Initialization complete. Launching workers. 00:46:41.782 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 121521, failed: 0 00:46:41.782 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30398, failed to submit 91123 00:46:41.782 success 0, unsuccessful 30398, failed 0 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:41.782 12:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:44.316 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:44.316 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:44.317 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:44.317 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:44.317 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:44.317 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:44.317 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:46:44.317 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:44.883 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:46:45.142 00:46:45.142 real 0m17.504s 00:46:45.142 user 0m9.202s 00:46:45.142 sys 0m5.118s 00:46:45.142 12:49:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:45.142 12:49:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:45.142 ************************************ 00:46:45.142 END TEST kernel_target_abort 00:46:45.142 ************************************ 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:45.142 rmmod nvme_tcp 00:46:45.142 rmmod nvme_fabrics 00:46:45.142 rmmod nvme_keyring 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 4018569 ']' 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 4018569 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 4018569 ']' 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 4018569 00:46:45.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4018569) - No such process 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 4018569 is not found' 00:46:45.142 Process with pid 4018569 is not found 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:45.142 12:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:47.719 Waiting for block devices as requested 00:46:47.719 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:47.719 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:47.978 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:47.978 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:47.978 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:47.978 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:48.236 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:48.236 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:48.236 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:48.236 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:48.496 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:48.496 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:48.496 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:48.496 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:48.754 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:48.754 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:48.754 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:49.013 12:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:50.918 12:49:57 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:50.918 00:46:50.918 real 0m48.540s 00:46:50.918 user 1m12.846s 00:46:50.918 sys 0m15.523s 00:46:50.918 12:49:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:50.918 12:49:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:50.918 ************************************ 00:46:50.918 END TEST nvmf_abort_qd_sizes 00:46:50.918 ************************************ 00:46:50.918 12:49:57 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:50.918 12:49:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:50.918 12:49:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:50.918 12:49:57 -- common/autotest_common.sh@10 -- # set +x 00:46:50.918 ************************************ 00:46:50.918 START TEST keyring_file 00:46:50.918 ************************************ 00:46:50.918 12:49:57 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:51.177 * Looking for test storage... 
00:46:51.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:51.177 12:49:57 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:51.177 12:49:57 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:46:51.177 12:49:57 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:51.177 12:49:57 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:51.177 12:49:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:51.177 12:49:57 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:51.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.178 --rc genhtml_branch_coverage=1 00:46:51.178 --rc genhtml_function_coverage=1 00:46:51.178 --rc genhtml_legend=1 00:46:51.178 --rc geninfo_all_blocks=1 00:46:51.178 --rc geninfo_unexecuted_blocks=1 00:46:51.178 00:46:51.178 ' 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:51.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.178 --rc genhtml_branch_coverage=1 00:46:51.178 --rc genhtml_function_coverage=1 00:46:51.178 --rc genhtml_legend=1 00:46:51.178 --rc geninfo_all_blocks=1 
00:46:51.178 --rc geninfo_unexecuted_blocks=1 00:46:51.178 00:46:51.178 ' 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:51.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.178 --rc genhtml_branch_coverage=1 00:46:51.178 --rc genhtml_function_coverage=1 00:46:51.178 --rc genhtml_legend=1 00:46:51.178 --rc geninfo_all_blocks=1 00:46:51.178 --rc geninfo_unexecuted_blocks=1 00:46:51.178 00:46:51.178 ' 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:51.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.178 --rc genhtml_branch_coverage=1 00:46:51.178 --rc genhtml_function_coverage=1 00:46:51.178 --rc genhtml_legend=1 00:46:51.178 --rc geninfo_all_blocks=1 00:46:51.178 --rc geninfo_unexecuted_blocks=1 00:46:51.178 00:46:51.178 ' 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:51.178 12:49:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:51.178 12:49:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:51.178 12:49:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:51.178 12:49:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:51.178 12:49:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.178 12:49:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.178 12:49:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.178 12:49:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:51.178 12:49:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:51.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u9KAV6afMc 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u9KAV6afMc 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u9KAV6afMc 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.u9KAV6afMc 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u14iKSstWB 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:51.178 12:49:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u14iKSstWB 00:46:51.178 12:49:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u14iKSstWB 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.u14iKSstWB 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=4027509 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4027509 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4027509 ']' 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:46:51.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:51.178 12:49:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:51.178 12:49:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:51.437 [2024-12-10 12:49:58.057230] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:46:51.437 [2024-12-10 12:49:58.057324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027509 ] 00:46:51.437 [2024-12-10 12:49:58.168755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.696 [2024-12-10 12:49:58.279830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:52.263 12:49:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:52.263 12:49:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:52.263 12:49:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:52.263 12:49:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.263 12:49:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:52.263 [2024-12-10 12:49:59.072816] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:52.523 null0 00:46:52.523 [2024-12-10 12:49:59.104856] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:52.523 [2024-12-10 12:49:59.105202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.523 12:49:59 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:52.523 [2024-12-10 12:49:59.132920] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:52.523 request: 00:46:52.523 { 00:46:52.523 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:52.523 "secure_channel": false, 00:46:52.523 "listen_address": { 00:46:52.523 "trtype": "tcp", 00:46:52.523 "traddr": "127.0.0.1", 00:46:52.523 "trsvcid": "4420" 00:46:52.523 }, 00:46:52.523 "method": "nvmf_subsystem_add_listener", 00:46:52.523 "req_id": 1 00:46:52.523 } 00:46:52.523 Got JSON-RPC error response 
00:46:52.523 response: 00:46:52.523 { 00:46:52.523 "code": -32602, 00:46:52.523 "message": "Invalid parameters" 00:46:52.523 } 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:52.523 12:49:59 keyring_file -- keyring/file.sh@47 -- # bperfpid=4027669 00:46:52.523 12:49:59 keyring_file -- keyring/file.sh@49 -- # waitforlisten 4027669 /var/tmp/bperf.sock 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4027669 ']' 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:52.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:52.523 12:49:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:52.523 12:49:59 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:52.523 [2024-12-10 12:49:59.207572] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:46:52.523 [2024-12-10 12:49:59.207661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027669 ] 00:46:52.523 [2024-12-10 12:49:59.320343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:52.782 [2024-12-10 12:49:59.432035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:53.350 12:49:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:53.350 12:49:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:53.350 12:49:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:53.350 12:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:53.609 12:50:00 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.u14iKSstWB 00:46:53.609 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.u14iKSstWB 00:46:53.609 12:50:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:53.609 12:50:00 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:53.609 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:53.609 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:53.609 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.930 12:50:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.u9KAV6afMc == \/\t\m\p\/\t\m\p\.\u\9\K\A\V\6\a\f\M\c ]] 00:46:53.930 12:50:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:53.930 12:50:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:53.930 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:53.930 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.930 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:54.189 12:50:00 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.u14iKSstWB == \/\t\m\p\/\t\m\p\.\u\1\4\i\K\S\s\t\W\B ]] 00:46:54.189 12:50:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.189 12:50:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:54.189 12:50:00 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:54.189 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.448 12:50:01 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:54.448 12:50:01 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.448 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.706 [2024-12-10 12:50:01.308736] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:54.706 nvme0n1 00:46:54.706 12:50:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:54.706 12:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:54.706 12:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.706 12:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.706 12:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.706 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.965 12:50:01 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:54.965 12:50:01 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:54.965 12:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:54.965 12:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.965 12:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.965 12:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:54.965 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.965 12:50:01 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:54.965 12:50:01 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:55.223 Running I/O for 1 seconds... 
00:46:56.159 15135.00 IOPS, 59.12 MiB/s 00:46:56.159 Latency(us) 00:46:56.159 [2024-12-10T11:50:02.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:56.159 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:56.159 nvme0n1 : 1.01 15182.38 59.31 0.00 0.00 8412.27 5274.09 18225.25 00:46:56.159 [2024-12-10T11:50:02.985Z] =================================================================================================================== 00:46:56.159 [2024-12-10T11:50:02.985Z] Total : 15182.38 59.31 0.00 0.00 8412.27 5274.09 18225.25 00:46:56.159 { 00:46:56.159 "results": [ 00:46:56.159 { 00:46:56.159 "job": "nvme0n1", 00:46:56.159 "core_mask": "0x2", 00:46:56.159 "workload": "randrw", 00:46:56.159 "percentage": 50, 00:46:56.159 "status": "finished", 00:46:56.159 "queue_depth": 128, 00:46:56.159 "io_size": 4096, 00:46:56.159 "runtime": 1.00531, 00:46:56.159 "iops": 15182.381553948533, 00:46:56.159 "mibps": 59.30617794511146, 00:46:56.159 "io_failed": 0, 00:46:56.159 "io_timeout": 0, 00:46:56.159 "avg_latency_us": 8412.270012198813, 00:46:56.159 "min_latency_us": 5274.087619047619, 00:46:56.159 "max_latency_us": 18225.249523809525 00:46:56.159 } 00:46:56.159 ], 00:46:56.159 "core_count": 1 00:46:56.159 } 00:46:56.159 12:50:02 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:56.159 12:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:56.419 12:50:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:56.419 12:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:56.419 12:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:56.419 12:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:56.419 12:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:56.419 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.677 12:50:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:56.678 12:50:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:56.678 12:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:56.678 12:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:56.678 12:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:56.678 12:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:56.678 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.678 12:50:03 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:56.678 12:50:03 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:56.678 12:50:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:56.678 12:50:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:56.678 12:50:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 
00:46:56.678 12:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:56.678 12:50:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:56.678 12:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:56.678 12:50:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:56.678 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:56.936 [2024-12-10 12:50:03.660866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:56.936 [2024-12-10 12:50:03.661654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:46:56.936 [2024-12-10 12:50:03.662637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:46:56.936 [2024-12-10 12:50:03.663635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:56.936 [2024-12-10 12:50:03.663656] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:56.936 [2024-12-10 12:50:03.663668] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:56.936 [2024-12-10 12:50:03.663681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:46:56.936 request: 00:46:56.936 { 00:46:56.936 "name": "nvme0", 00:46:56.936 "trtype": "tcp", 00:46:56.936 "traddr": "127.0.0.1", 00:46:56.936 "adrfam": "ipv4", 00:46:56.936 "trsvcid": "4420", 00:46:56.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:56.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:56.936 "prchk_reftag": false, 00:46:56.936 "prchk_guard": false, 00:46:56.936 "hdgst": false, 00:46:56.936 "ddgst": false, 00:46:56.936 "psk": "key1", 00:46:56.936 "allow_unrecognized_csi": false, 00:46:56.936 "method": "bdev_nvme_attach_controller", 00:46:56.936 "req_id": 1 00:46:56.936 } 00:46:56.936 Got JSON-RPC error response 00:46:56.936 response: 00:46:56.936 { 00:46:56.936 "code": -5, 00:46:56.936 "message": "Input/output error" 00:46:56.937 } 00:46:56.937 12:50:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:56.937 12:50:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:56.937 12:50:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:56.937 12:50:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:56.937 12:50:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:56.937 12:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:56.937 12:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:56.937 12:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:56.937 12:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:56.937 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:57.195 12:50:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:57.195 12:50:03 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:57.195 12:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:57.195 12:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:57.195 12:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:57.195 12:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:57.195 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:57.497 12:50:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:57.497 12:50:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:57.497 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:57.497 12:50:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:57.497 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:57.754 12:50:04 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:57.754 12:50:04 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:57.754 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.013 12:50:04 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:58.013 12:50:04 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.u9KAV6afMc 00:46:58.013 12:50:04 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:58.013 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:58.013 [2024-12-10 12:50:04.818203] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.u9KAV6afMc': 0100660 00:46:58.013 [2024-12-10 12:50:04.818237] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:58.013 request: 00:46:58.013 { 00:46:58.013 "name": "key0", 00:46:58.013 "path": "/tmp/tmp.u9KAV6afMc", 00:46:58.013 "method": "keyring_file_add_key", 00:46:58.013 "req_id": 1 00:46:58.013 } 00:46:58.013 Got JSON-RPC error response 00:46:58.013 response: 00:46:58.013 { 00:46:58.013 "code": -1, 00:46:58.013 "message": "Operation not permitted" 00:46:58.013 } 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:58.013 12:50:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:58.013 12:50:04 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.u9KAV6afMc 00:46:58.013 12:50:04 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:58.013 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u9KAV6afMc 00:46:58.271 12:50:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.u9KAV6afMc 00:46:58.271 12:50:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:58.271 12:50:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:58.271 12:50:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:58.271 12:50:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:58.271 12:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.271 12:50:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:58.529 12:50:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:58.529 12:50:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:58.529 12:50:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:58.529 12:50:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:58.529 12:50:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:58.529 12:50:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:58.529 12:50:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:58.529 12:50:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:58.529 12:50:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:58.529 12:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:58.787 [2024-12-10 12:50:05.383751] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.u9KAV6afMc': No such file or directory 00:46:58.787 [2024-12-10 12:50:05.383786] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:58.787 [2024-12-10 12:50:05.383806] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:58.787 [2024-12-10 12:50:05.383817] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:58.787 [2024-12-10 12:50:05.383828] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:58.787 [2024-12-10 12:50:05.383838] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:58.787 request: 00:46:58.787 { 00:46:58.787 "name": "nvme0", 00:46:58.787 "trtype": "tcp", 00:46:58.787 "traddr": "127.0.0.1", 00:46:58.787 "adrfam": "ipv4", 00:46:58.787 "trsvcid": "4420", 00:46:58.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:58.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:58.787 "prchk_reftag": false, 00:46:58.787 "prchk_guard": false, 00:46:58.787 "hdgst": false, 00:46:58.787 "ddgst": false, 00:46:58.788 "psk": "key0", 00:46:58.788 "allow_unrecognized_csi": false, 00:46:58.788 "method": "bdev_nvme_attach_controller", 00:46:58.788 "req_id": 1 00:46:58.788 } 00:46:58.788 Got JSON-RPC error response 00:46:58.788 response: 00:46:58.788 { 00:46:58.788 "code": -19, 00:46:58.788 "message": "No such device" 00:46:58.788 } 00:46:58.788 12:50:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:58.788 12:50:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:58.788 12:50:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:58.788 12:50:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:58.788 12:50:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:58.788 12:50:05 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uDY6GG9t9S 00:46:58.788 12:50:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:58.788 12:50:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:58.788 12:50:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:58.788 12:50:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:58.788 12:50:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:58.788 12:50:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:58.788 12:50:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:59.046 12:50:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uDY6GG9t9S 00:46:59.046 12:50:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uDY6GG9t9S 00:46:59.046 12:50:05 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.uDY6GG9t9S 00:46:59.046 12:50:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uDY6GG9t9S 00:46:59.046 12:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uDY6GG9t9S 00:46:59.047 12:50:05 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:59.047 12:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:59.305 nvme0n1 00:46:59.305 12:50:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:59.305 12:50:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:59.305 12:50:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:59.305 12:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:59.305 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:59.305 12:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:59.564 12:50:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:59.564 12:50:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:59.564 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:59.822 12:50:06 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:59.822 12:50:06 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:59.822 12:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:59.822 12:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:59.822 12:50:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:00.080 12:50:06 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:00.080 12:50:06 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:00.080 12:50:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:00.080 12:50:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:00.080 12:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:00.080 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:00.080 12:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:00.080 12:50:06 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:47:00.080 12:50:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:00.080 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:00.339 12:50:07 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:47:00.339 12:50:07 keyring_file -- keyring/file.sh@105 -- # jq length 00:47:00.339 12:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:00.597 12:50:07 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:47:00.597 12:50:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uDY6GG9t9S 00:47:00.597 12:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uDY6GG9t9S 00:47:00.855 12:50:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.u14iKSstWB 00:47:00.855 12:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.u14iKSstWB 00:47:00.855 12:50:07 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:00.855 12:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:01.113 nvme0n1 00:47:01.113 12:50:07 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:47:01.113 12:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:01.371 12:50:08 keyring_file -- keyring/file.sh@113 -- # config='{ 00:47:01.371 "subsystems": [ 00:47:01.371 { 00:47:01.371 "subsystem": "keyring", 00:47:01.371 "config": [ 00:47:01.371 { 00:47:01.371 "method": "keyring_file_add_key", 00:47:01.371 "params": { 00:47:01.371 "name": "key0", 00:47:01.371 "path": "/tmp/tmp.uDY6GG9t9S" 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "keyring_file_add_key", 00:47:01.371 "params": { 00:47:01.371 "name": "key1", 00:47:01.371 "path": "/tmp/tmp.u14iKSstWB" 00:47:01.371 } 00:47:01.371 } 00:47:01.371 ] 00:47:01.371 
}, 00:47:01.371 { 00:47:01.371 "subsystem": "iobuf", 00:47:01.371 "config": [ 00:47:01.371 { 00:47:01.371 "method": "iobuf_set_options", 00:47:01.371 "params": { 00:47:01.371 "small_pool_count": 8192, 00:47:01.371 "large_pool_count": 1024, 00:47:01.371 "small_bufsize": 8192, 00:47:01.371 "large_bufsize": 135168, 00:47:01.371 "enable_numa": false 00:47:01.371 } 00:47:01.371 } 00:47:01.371 ] 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "subsystem": "sock", 00:47:01.371 "config": [ 00:47:01.371 { 00:47:01.371 "method": "sock_set_default_impl", 00:47:01.371 "params": { 00:47:01.371 "impl_name": "posix" 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "sock_impl_set_options", 00:47:01.371 "params": { 00:47:01.371 "impl_name": "ssl", 00:47:01.371 "recv_buf_size": 4096, 00:47:01.371 "send_buf_size": 4096, 00:47:01.371 "enable_recv_pipe": true, 00:47:01.371 "enable_quickack": false, 00:47:01.371 "enable_placement_id": 0, 00:47:01.371 "enable_zerocopy_send_server": true, 00:47:01.371 "enable_zerocopy_send_client": false, 00:47:01.371 "zerocopy_threshold": 0, 00:47:01.371 "tls_version": 0, 00:47:01.371 "enable_ktls": false 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "sock_impl_set_options", 00:47:01.371 "params": { 00:47:01.371 "impl_name": "posix", 00:47:01.371 "recv_buf_size": 2097152, 00:47:01.371 "send_buf_size": 2097152, 00:47:01.371 "enable_recv_pipe": true, 00:47:01.371 "enable_quickack": false, 00:47:01.371 "enable_placement_id": 0, 00:47:01.371 "enable_zerocopy_send_server": true, 00:47:01.371 "enable_zerocopy_send_client": false, 00:47:01.371 "zerocopy_threshold": 0, 00:47:01.371 "tls_version": 0, 00:47:01.371 "enable_ktls": false 00:47:01.371 } 00:47:01.371 } 00:47:01.371 ] 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "subsystem": "vmd", 00:47:01.371 "config": [] 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "subsystem": "accel", 00:47:01.371 "config": [ 00:47:01.371 { 00:47:01.371 "method": "accel_set_options", 00:47:01.371 "params": { 00:47:01.371 "small_cache_size": 128, 00:47:01.371 "large_cache_size": 16, 00:47:01.371 "task_count": 2048, 00:47:01.371 "sequence_count": 2048, 00:47:01.371 "buf_count": 2048 00:47:01.371 } 00:47:01.371 } 00:47:01.371 ] 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "subsystem": "bdev", 00:47:01.371 "config": [ 00:47:01.371 { 00:47:01.371 "method": "bdev_set_options", 00:47:01.371 "params": { 00:47:01.371 "bdev_io_pool_size": 65535, 00:47:01.371 "bdev_io_cache_size": 256, 00:47:01.371 "bdev_auto_examine": true, 00:47:01.371 "iobuf_small_cache_size": 128, 00:47:01.371 "iobuf_large_cache_size": 16 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "bdev_raid_set_options", 00:47:01.371 "params": { 00:47:01.371 "process_window_size_kb": 1024, 00:47:01.371 "process_max_bandwidth_mb_sec": 0 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "bdev_iscsi_set_options", 00:47:01.371 "params": { 00:47:01.371 "timeout_sec": 30 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "bdev_nvme_set_options", 00:47:01.371 "params": { 00:47:01.371 "action_on_timeout": "none", 00:47:01.371 "timeout_us": 0, 00:47:01.371 "timeout_admin_us": 0, 00:47:01.371 "keep_alive_timeout_ms": 10000, 00:47:01.371 "arbitration_burst": 0, 00:47:01.371 "low_priority_weight": 0, 00:47:01.371 "medium_priority_weight": 0, 00:47:01.371 "high_priority_weight": 0, 00:47:01.371 "nvme_adminq_poll_period_us": 10000, 00:47:01.371 "nvme_ioq_poll_period_us": 0, 00:47:01.371 "io_queue_requests": 512, 00:47:01.371 
"delay_cmd_submit": true, 00:47:01.371 "transport_retry_count": 4, 00:47:01.371 "bdev_retry_count": 3, 00:47:01.371 "transport_ack_timeout": 0, 00:47:01.371 "ctrlr_loss_timeout_sec": 0, 00:47:01.371 "reconnect_delay_sec": 0, 00:47:01.371 "fast_io_fail_timeout_sec": 0, 00:47:01.371 "disable_auto_failback": false, 00:47:01.371 "generate_uuids": false, 00:47:01.371 "transport_tos": 0, 00:47:01.371 "nvme_error_stat": false, 00:47:01.371 "rdma_srq_size": 0, 00:47:01.371 "io_path_stat": false, 00:47:01.371 "allow_accel_sequence": false, 00:47:01.371 "rdma_max_cq_size": 0, 00:47:01.371 "rdma_cm_event_timeout_ms": 0, 00:47:01.371 "dhchap_digests": [ 00:47:01.371 "sha256", 00:47:01.371 "sha384", 00:47:01.371 "sha512" 00:47:01.371 ], 00:47:01.371 "dhchap_dhgroups": [ 00:47:01.371 "null", 00:47:01.371 "ffdhe2048", 00:47:01.371 "ffdhe3072", 00:47:01.371 "ffdhe4096", 00:47:01.371 "ffdhe6144", 00:47:01.371 "ffdhe8192" 00:47:01.371 ] 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "bdev_nvme_attach_controller", 00:47:01.371 "params": { 00:47:01.371 "name": "nvme0", 00:47:01.371 "trtype": "TCP", 00:47:01.371 "adrfam": "IPv4", 00:47:01.371 "traddr": "127.0.0.1", 00:47:01.371 "trsvcid": "4420", 00:47:01.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:01.371 "prchk_reftag": false, 00:47:01.371 "prchk_guard": false, 00:47:01.371 "ctrlr_loss_timeout_sec": 0, 00:47:01.371 "reconnect_delay_sec": 0, 00:47:01.371 "fast_io_fail_timeout_sec": 0, 00:47:01.371 "psk": "key0", 00:47:01.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:01.371 "hdgst": false, 00:47:01.371 "ddgst": false, 00:47:01.371 "multipath": "multipath" 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "bdev_nvme_set_hotplug", 00:47:01.371 "params": { 00:47:01.371 "period_us": 100000, 00:47:01.371 "enable": false 00:47:01.371 } 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "method": "bdev_wait_for_examine" 00:47:01.371 } 00:47:01.371 ] 00:47:01.371 }, 00:47:01.371 { 00:47:01.371 "subsystem": "nbd", 00:47:01.371 "config": [] 00:47:01.371 } 00:47:01.371 ] 00:47:01.371 }' 00:47:01.371 12:50:08 keyring_file -- keyring/file.sh@115 -- # killprocess 4027669 00:47:01.371 12:50:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4027669 ']' 00:47:01.371 12:50:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4027669 00:47:01.371 12:50:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:01.371 12:50:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:01.372 12:50:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4027669 00:47:01.372 12:50:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:01.372 12:50:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:01.372 12:50:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4027669' 00:47:01.372 killing process with pid 4027669 00:47:01.372 12:50:08 keyring_file -- common/autotest_common.sh@973 -- # kill 4027669 00:47:01.372 Received shutdown signal, test time was about 1.000000 seconds 00:47:01.372 00:47:01.372 Latency(us) 00:47:01.372 [2024-12-10T11:50:08.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:01.372 [2024-12-10T11:50:08.198Z] =================================================================================================================== 00:47:01.372 [2024-12-10T11:50:08.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:01.372 12:50:08 
keyring_file -- common/autotest_common.sh@978 -- # wait 4027669 00:47:02.307 12:50:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=4029371 00:47:02.307 12:50:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 4029371 /var/tmp/bperf.sock 00:47:02.307 12:50:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4029371 ']' 00:47:02.307 12:50:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:02.307 12:50:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:02.307 12:50:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:02.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:02.307 12:50:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:02.307 12:50:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:02.307 12:50:09 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:02.307 12:50:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:47:02.307 "subsystems": [ 00:47:02.307 { 00:47:02.307 "subsystem": "keyring", 00:47:02.307 "config": [ 00:47:02.307 { 00:47:02.307 "method": "keyring_file_add_key", 00:47:02.307 "params": { 00:47:02.307 "name": "key0", 00:47:02.307 "path": "/tmp/tmp.uDY6GG9t9S" 00:47:02.307 } 00:47:02.307 }, 00:47:02.307 { 00:47:02.307 "method": "keyring_file_add_key", 00:47:02.307 "params": { 00:47:02.307 "name": "key1", 00:47:02.307 "path": "/tmp/tmp.u14iKSstWB" 00:47:02.307 } 00:47:02.307 } 00:47:02.307 ] 00:47:02.307 }, 00:47:02.307 { 00:47:02.307 "subsystem": "iobuf", 00:47:02.307 "config": [ 00:47:02.307 { 00:47:02.307 "method": "iobuf_set_options", 00:47:02.307 "params": { 00:47:02.307 "small_pool_count": 8192, 00:47:02.307 "large_pool_count": 1024, 00:47:02.307 "small_bufsize": 8192, 00:47:02.307 "large_bufsize": 135168, 00:47:02.307 "enable_numa": false 00:47:02.307 } 00:47:02.307 } 00:47:02.307 ] 00:47:02.307 }, 00:47:02.307 { 00:47:02.307 "subsystem": "sock", 00:47:02.307 "config": [ 00:47:02.307 { 00:47:02.307 "method": "sock_set_default_impl", 00:47:02.307 "params": { 00:47:02.307 "impl_name": "posix" 00:47:02.307 } 00:47:02.307 }, 00:47:02.307 { 00:47:02.307 "method": "sock_impl_set_options", 00:47:02.307 "params": { 00:47:02.307 "impl_name": "ssl", 00:47:02.307 "recv_buf_size": 4096, 00:47:02.307 "send_buf_size": 4096, 00:47:02.307 "enable_recv_pipe": true, 00:47:02.307 "enable_quickack": false, 00:47:02.307 "enable_placement_id": 0, 00:47:02.307 "enable_zerocopy_send_server": true, 00:47:02.307 "enable_zerocopy_send_client": false, 00:47:02.307 "zerocopy_threshold": 0, 00:47:02.307 "tls_version": 0, 00:47:02.307 "enable_ktls": false 00:47:02.307 } 00:47:02.307 }, 00:47:02.307 { 00:47:02.307 "method": "sock_impl_set_options", 00:47:02.307 "params": { 00:47:02.307 "impl_name": "posix", 00:47:02.307 "recv_buf_size": 2097152, 00:47:02.307 "send_buf_size": 2097152, 00:47:02.307 "enable_recv_pipe": true, 00:47:02.307 "enable_quickack": false, 00:47:02.307 "enable_placement_id": 0, 00:47:02.307 "enable_zerocopy_send_server": true, 00:47:02.307 "enable_zerocopy_send_client": false, 00:47:02.307 "zerocopy_threshold": 0, 00:47:02.307 "tls_version": 0, 00:47:02.307 "enable_ktls": false 00:47:02.308 } 00:47:02.308 } 00:47:02.308 ] 00:47:02.308 }, 
00:47:02.308 { 00:47:02.308 "subsystem": "vmd", 00:47:02.308 "config": [] 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "subsystem": "accel", 00:47:02.308 "config": [ 00:47:02.308 { 00:47:02.308 "method": "accel_set_options", 00:47:02.308 "params": { 00:47:02.308 "small_cache_size": 128, 00:47:02.308 "large_cache_size": 16, 00:47:02.308 "task_count": 2048, 00:47:02.308 "sequence_count": 2048, 00:47:02.308 "buf_count": 2048 00:47:02.308 } 00:47:02.308 } 00:47:02.308 ] 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "subsystem": "bdev", 00:47:02.308 "config": [ 00:47:02.308 { 00:47:02.308 "method": "bdev_set_options", 00:47:02.308 "params": { 00:47:02.308 "bdev_io_pool_size": 65535, 00:47:02.308 "bdev_io_cache_size": 256, 00:47:02.308 "bdev_auto_examine": true, 00:47:02.308 "iobuf_small_cache_size": 128, 00:47:02.308 "iobuf_large_cache_size": 16 00:47:02.308 } 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "method": "bdev_raid_set_options", 00:47:02.308 "params": { 00:47:02.308 "process_window_size_kb": 1024, 00:47:02.308 "process_max_bandwidth_mb_sec": 0 00:47:02.308 } 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "method": "bdev_iscsi_set_options", 00:47:02.308 "params": { 00:47:02.308 "timeout_sec": 30 00:47:02.308 } 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "method": "bdev_nvme_set_options", 00:47:02.308 "params": { 00:47:02.308 "action_on_timeout": "none", 00:47:02.308 "timeout_us": 0, 00:47:02.308 "timeout_admin_us": 0, 00:47:02.308 "keep_alive_timeout_ms": 10000, 00:47:02.308 "arbitration_burst": 0, 00:47:02.308 "low_priority_weight": 0, 00:47:02.308 "medium_priority_weight": 0, 00:47:02.308 "high_priority_weight": 0, 00:47:02.308 "nvme_adminq_poll_period_us": 10000, 00:47:02.308 "nvme_ioq_poll_period_us": 0, 00:47:02.308 "io_queue_requests": 512, 00:47:02.308 "delay_cmd_submit": true, 00:47:02.308 "transport_retry_count": 4, 00:47:02.308 "bdev_retry_count": 3, 00:47:02.308 "transport_ack_timeout": 0, 00:47:02.308 "ctrlr_loss_timeout_sec": 0, 00:47:02.308 "reconnect_delay_sec": 0, 00:47:02.308 "fast_io_fail_timeout_sec": 0, 00:47:02.308 "disable_auto_failback": false, 00:47:02.308 "generate_uuids": false, 00:47:02.308 "transport_tos": 0, 00:47:02.308 "nvme_error_stat": false, 00:47:02.308 "rdma_srq_size": 0, 00:47:02.308 "io_path_stat": false, 00:47:02.308 "allow_accel_sequence": false, 00:47:02.308 "rdma_max_cq_size": 0, 00:47:02.308 "rdma_cm_event_timeout_ms": 0, 00:47:02.308 "dhchap_digests": [ 00:47:02.308 "sha256", 00:47:02.308 "sha384", 00:47:02.308 "sha512" 00:47:02.308 ], 00:47:02.308 "dhchap_dhgroups": [ 00:47:02.308 "null", 00:47:02.308 "ffdhe2048", 00:47:02.308 "ffdhe3072", 00:47:02.308 "ffdhe4096", 00:47:02.308 "ffdhe6144", 00:47:02.308 "ffdhe8192" 00:47:02.308 ] 00:47:02.308 } 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "method": "bdev_nvme_attach_controller", 00:47:02.308 "params": { 00:47:02.308 "name": "nvme0", 00:47:02.308 "trtype": "TCP", 00:47:02.308 "adrfam": "IPv4", 00:47:02.308 "traddr": "127.0.0.1", 00:47:02.308 "trsvcid": "4420", 00:47:02.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:02.308 "prchk_reftag": false, 00:47:02.308 "prchk_guard": false, 00:47:02.308 "ctrlr_loss_timeout_sec": 0, 00:47:02.308 "reconnect_delay_sec": 0, 00:47:02.308 "fast_io_fail_timeout_sec": 0, 00:47:02.308 "psk": "key0", 00:47:02.308 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:02.308 "hdgst": false, 00:47:02.308 "ddgst": false, 00:47:02.308 "multipath": "multipath" 00:47:02.308 } 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "method": "bdev_nvme_set_hotplug", 00:47:02.308 "params": { 
00:47:02.308 "period_us": 100000, 00:47:02.308 "enable": false 00:47:02.308 } 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "method": "bdev_wait_for_examine" 00:47:02.308 } 00:47:02.308 ] 00:47:02.308 }, 00:47:02.308 { 00:47:02.308 "subsystem": "nbd", 00:47:02.308 "config": [] 00:47:02.308 } 00:47:02.308 ] 00:47:02.308 }' 00:47:02.567 [2024-12-10 12:50:09.135427] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:47:02.567 [2024-12-10 12:50:09.135512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029371 ] 00:47:02.567 [2024-12-10 12:50:09.246217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:02.567 [2024-12-10 12:50:09.355463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:03.133 [2024-12-10 12:50:09.781158] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:03.133 12:50:09 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:03.133 12:50:09 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:03.133 12:50:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:47:03.133 12:50:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:03.133 12:50:09 keyring_file -- keyring/file.sh@121 -- # jq length 00:47:03.391 12:50:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:03.391 12:50:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:47:03.391 12:50:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:03.391 12:50:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:03.391 12:50:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:03.391 12:50:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:03.391 12:50:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:03.649 12:50:10 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:47:03.649 12:50:10 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:47:03.649 12:50:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:03.649 12:50:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:03.649 12:50:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:03.649 12:50:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:03.649 12:50:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:03.907 12:50:10 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:47:03.907 12:50:10 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:47:03.908 12:50:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:03.908 12:50:10 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:47:03.908 12:50:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:47:03.908 12:50:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:03.908 12:50:10 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.uDY6GG9t9S /tmp/tmp.u14iKSstWB 00:47:03.908 12:50:10 keyring_file -- keyring/file.sh@20 -- # killprocess 4029371 00:47:03.908 12:50:10 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4029371 ']' 00:47:03.908 12:50:10 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4029371 00:47:03.908 12:50:10 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:03.908 12:50:10 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:03.908 12:50:10 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4029371 00:47:04.166 12:50:10 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:04.166 12:50:10 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:04.166 12:50:10 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4029371' 00:47:04.166 killing process with pid 4029371 00:47:04.166 12:50:10 keyring_file -- common/autotest_common.sh@973 -- # kill 4029371 00:47:04.166 Received shutdown signal, test time was about 1.000000 seconds 00:47:04.166 00:47:04.166 Latency(us) 00:47:04.166 [2024-12-10T11:50:10.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:04.166 [2024-12-10T11:50:10.992Z] =================================================================================================================== 00:47:04.166 [2024-12-10T11:50:10.992Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:04.166 12:50:10 keyring_file -- common/autotest_common.sh@978 -- # wait 4029371 00:47:05.101 12:50:11 keyring_file -- keyring/file.sh@21 -- # killprocess 4027509 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4027509 ']' 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4027509 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4027509 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4027509' 00:47:05.101 killing process with pid 4027509 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@973 -- # kill 4027509 00:47:05.101 12:50:11 keyring_file -- common/autotest_common.sh@978 -- # wait 4027509 00:47:07.646 00:47:07.646 real 0m16.282s 00:47:07.646 user 0m35.377s 00:47:07.646 sys 0m2.907s 00:47:07.646 12:50:13 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:07.646 12:50:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:07.646 ************************************ 00:47:07.646 END TEST keyring_file 00:47:07.646 ************************************ 00:47:07.646 12:50:14 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:47:07.646 12:50:14 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:07.646 12:50:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:07.646 12:50:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:07.646 12:50:14 
-- common/autotest_common.sh@10 -- # set +x 00:47:07.646 ************************************ 00:47:07.646 START TEST keyring_linux 00:47:07.646 ************************************ 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:07.646 Joined session keyring: 302151880 00:47:07.646 * Looking for test storage... 00:47:07.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:07.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:07.646 --rc genhtml_branch_coverage=1 00:47:07.646 --rc genhtml_function_coverage=1 00:47:07.646 --rc genhtml_legend=1 00:47:07.646 --rc geninfo_all_blocks=1 00:47:07.646 --rc geninfo_unexecuted_blocks=1 00:47:07.646 00:47:07.646 ' 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:07.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:07.646 --rc genhtml_branch_coverage=1 00:47:07.646 --rc genhtml_function_coverage=1 00:47:07.646 --rc genhtml_legend=1 00:47:07.646 --rc geninfo_all_blocks=1 00:47:07.646 --rc geninfo_unexecuted_blocks=1 00:47:07.646 00:47:07.646 ' 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:07.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:07.646 --rc genhtml_branch_coverage=1 00:47:07.646 --rc genhtml_function_coverage=1 00:47:07.646 --rc genhtml_legend=1 00:47:07.646 --rc geninfo_all_blocks=1 00:47:07.646 --rc geninfo_unexecuted_blocks=1 00:47:07.646 00:47:07.646 ' 00:47:07.646 12:50:14 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:07.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:07.646 --rc genhtml_branch_coverage=1 00:47:07.646 --rc genhtml_function_coverage=1 00:47:07.646 --rc genhtml_legend=1 00:47:07.646 --rc geninfo_all_blocks=1 00:47:07.646 --rc geninfo_unexecuted_blocks=1 00:47:07.646 00:47:07.646 ' 00:47:07.646 12:50:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:07.646 12:50:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:07.646 12:50:14 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:07.646 12:50:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:07.646 12:50:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:07.646 12:50:14 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:07.646 12:50:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:07.646 12:50:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
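[Note on the PSK strings used throughout this run: the prep_key/format_interchange_psk steps traced above for keyring_file, and repeated below for the :spdk-test: keys, build an NVMe TLS interchange PSK with an inline Python helper whose body is not captured in this trace. The following is therefore only a sketch, reconstructed from the NVMeTLSkey-1:00:...: strings visible later in the log, and all of it is an assumption rather than the test's actual helper: the configured key string appears to be taken as literal bytes, a 4-byte little-endian CRC32 is appended, and the result is base64-encoded between the NVMeTLSkey-1:<hash> prefix and a trailing colon.]

    key=00112233445566778899aabbccddeeff
    psk=$(python3 - "$key" <<'PYEOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                   # key string used as literal ASCII bytes
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian CRC32 trailer
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode(), end="")
    PYEOF
    )
    # keyring_file stores "$psk" in a chmod-0600 temp file; keyring_linux (below) instead
    # loads it into the kernel session keyring, matching the later trace output:
    keyctl add user :spdk-test:key0 "$psk" @s

[Run against key0 above, this sketch reproduces the NVMeTLSkey-1:00:MDAxMTIy...JEiQ: string that appears in the keyctl add trace further down, which is why the reconstruction seems plausible; it is still not the code the harness actually ran.]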
00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:07.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:07.646 12:50:14 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:07.646 12:50:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:07.646 12:50:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:07.646 12:50:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:07.646 12:50:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:07.646 12:50:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:07.647 12:50:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:07.647 12:50:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:07.647 /tmp/:spdk-test:key0 00:47:07.647 12:50:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:07.647 
12:50:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:07.647 12:50:14 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:07.647 12:50:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:07.647 /tmp/:spdk-test:key1 00:47:07.647 12:50:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4030350 00:47:07.647 12:50:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4030350 00:47:07.647 12:50:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:07.647 12:50:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 4030350 ']' 00:47:07.647 12:50:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:07.647 12:50:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:07.647 12:50:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:07.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:07.647 12:50:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:07.647 12:50:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:07.647 [2024-12-10 12:50:14.424492] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:47:07.647 [2024-12-10 12:50:14.424589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030350 ] 00:47:07.906 [2024-12-10 12:50:14.536073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:07.907 [2024-12-10 12:50:14.640426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:08.843 12:50:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:08.843 [2024-12-10 12:50:15.456086] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:08.843 null0 00:47:08.843 [2024-12-10 12:50:15.488105] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:08.843 [2024-12-10 12:50:15.488463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:08.843 12:50:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:08.843 486399245 00:47:08.843 12:50:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:08.843 406251320 00:47:08.843 12:50:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4030437 00:47:08.843 12:50:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:08.843 12:50:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4030437 /var/tmp/bperf.sock 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 4030437 ']' 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:08.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:08.843 12:50:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:08.843 [2024-12-10 12:50:15.584547] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:47:08.844 [2024-12-10 12:50:15.584635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030437 ] 00:47:09.103 [2024-12-10 12:50:15.697562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:09.103 [2024-12-10 12:50:15.803674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:09.671 12:50:16 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:09.671 12:50:16 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:09.671 12:50:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:09.671 12:50:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:09.930 12:50:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:09.930 12:50:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:10.294 12:50:17 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:10.294 12:50:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:10.577 [2024-12-10 12:50:17.231646] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:10.577 nvme0n1 00:47:10.577 12:50:17 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:10.577 12:50:17 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:10.577 12:50:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:10.577 12:50:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:10.577 12:50:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:10.577 12:50:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:10.836 12:50:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:10.836 12:50:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:10.836 12:50:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:10.836 12:50:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:10.836 12:50:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:10.836 12:50:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:10.836 12:50:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:11.095 12:50:17 keyring_linux -- keyring/linux.sh@25 -- # sn=486399245 00:47:11.095 12:50:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:11.095 12:50:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:11.095 12:50:17 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 486399245 == \4\8\6\3\9\9\2\4\5 ]] 00:47:11.095 12:50:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 486399245 00:47:11.095 12:50:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:11.095 12:50:17 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:11.095 Running I/O for 1 seconds... 00:47:12.033 16291.00 IOPS, 63.64 MiB/s 00:47:12.033 Latency(us) 00:47:12.033 [2024-12-10T11:50:18.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:12.033 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:12.033 nvme0n1 : 1.01 16292.89 63.64 0.00 0.00 7822.83 6647.22 18599.74 00:47:12.033 [2024-12-10T11:50:18.859Z] =================================================================================================================== 00:47:12.033 [2024-12-10T11:50:18.859Z] Total : 16292.89 63.64 0.00 0.00 7822.83 6647.22 18599.74 00:47:12.033 { 00:47:12.033 "results": [ 00:47:12.033 { 00:47:12.033 "job": "nvme0n1", 00:47:12.033 "core_mask": "0x2", 00:47:12.033 "workload": "randread", 00:47:12.033 "status": "finished", 00:47:12.033 "queue_depth": 128, 00:47:12.033 "io_size": 4096, 00:47:12.033 "runtime": 1.00774, 00:47:12.033 "iops": 16292.893008117173, 00:47:12.033 "mibps": 63.64411331295771, 00:47:12.033 "io_failed": 0, 00:47:12.033 "io_timeout": 0, 00:47:12.033 "avg_latency_us": 7822.830535819419, 00:47:12.033 "min_latency_us": 6647.222857142857, 00:47:12.033 "max_latency_us": 18599.74095238095 00:47:12.033 } 00:47:12.033 ], 00:47:12.033 "core_count": 1 00:47:12.033 } 00:47:12.033 12:50:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:12.033 12:50:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:12.292 12:50:19 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:12.292 12:50:19 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:12.292 12:50:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:12.292 12:50:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:12.292 12:50:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:12.292 12:50:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:12.551 12:50:19 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:12.551 12:50:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:12.551 12:50:19 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:12.551 12:50:19 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:12.551 12:50:19 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:47:12.551 12:50:19 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:47:12.551 12:50:19 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:12.551 12:50:19 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:12.551 12:50:19 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:12.551 12:50:19 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:12.551 12:50:19 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:12.551 12:50:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:12.811 [2024-12-10 12:50:19.392352] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:12.811 [2024-12-10 12:50:19.392696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:47:12.811 [2024-12-10 12:50:19.393678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:47:12.811 [2024-12-10 12:50:19.394676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:12.811 [2024-12-10 12:50:19.394695] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:12.811 [2024-12-10 12:50:19.394718] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:12.811 [2024-12-10 12:50:19.394730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
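[Note on this expected failure: linux.sh@84 runs the attach under the harness's NOT wrapper because :spdk-test:key1 was never configured on the target, so the TLS handshake appears to be rejected (hence the "Transport endpoint is not connected" errors above), and the request/response dump that follows shows rpc.py surfacing the resulting -5 Input/output error. A minimal sketch of the NOT idiom, assuming a simplified form of the real autotest_common.sh helper (whose es/valid_exec_arg bookkeeping, visible in the trace, is elided here):]

    NOT() {
        # Succeed only when the wrapped command fails, so an expected
        # error does not abort a `set -e` test run.
        if "$@"; then
            return 1
        fi
        return 0
    }
    # Expected to fail: key1's PSK does not match what the target listener was set up with.
    NOT rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1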
00:47:12.811 request: 00:47:12.811 { 00:47:12.811 "name": "nvme0", 00:47:12.811 "trtype": "tcp", 00:47:12.811 "traddr": "127.0.0.1", 00:47:12.811 "adrfam": "ipv4", 00:47:12.811 "trsvcid": "4420", 00:47:12.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:12.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:12.811 "prchk_reftag": false, 00:47:12.811 "prchk_guard": false, 00:47:12.811 "hdgst": false, 00:47:12.811 "ddgst": false, 00:47:12.811 "psk": ":spdk-test:key1", 00:47:12.811 "allow_unrecognized_csi": false, 00:47:12.811 "method": "bdev_nvme_attach_controller", 00:47:12.811 "req_id": 1 00:47:12.811 } 00:47:12.811 Got JSON-RPC error response 00:47:12.811 response: 00:47:12.811 { 00:47:12.811 "code": -5, 00:47:12.811 "message": "Input/output error" 00:47:12.811 } 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@33 -- # sn=486399245 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 486399245 00:47:12.811 1 links removed 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@33 -- # sn=406251320 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 406251320 00:47:12.811 1 links removed 00:47:12.811 12:50:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4030437 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4030437 ']' 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4030437 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4030437 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4030437' 00:47:12.811 killing process with pid 4030437 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@973 -- # kill 4030437 00:47:12.811 Received shutdown signal, test time was about 1.000000 seconds 00:47:12.811 00:47:12.811 
Latency(us) 00:47:12.811 [2024-12-10T11:50:19.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:12.811 [2024-12-10T11:50:19.637Z] =================================================================================================================== 00:47:12.811 [2024-12-10T11:50:19.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:12.811 12:50:19 keyring_linux -- common/autotest_common.sh@978 -- # wait 4030437 00:47:13.749 12:50:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4030350 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4030350 ']' 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4030350 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4030350 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4030350' 00:47:13.749 killing process with pid 4030350 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@973 -- # kill 4030350 00:47:13.749 12:50:20 keyring_linux -- common/autotest_common.sh@978 -- # wait 4030350 00:47:16.287 00:47:16.287 real 0m8.724s 00:47:16.287 user 0m14.269s 00:47:16.287 sys 0m1.603s 00:47:16.287 12:50:22 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:16.287 12:50:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:16.287 ************************************ 00:47:16.287 END TEST keyring_linux 00:47:16.287 ************************************ 00:47:16.287 12:50:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:47:16.287 12:50:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:16.287 12:50:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:16.287 12:50:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:47:16.287 12:50:22 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:47:16.287 12:50:22 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:47:16.287 12:50:22 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:47:16.287 12:50:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:16.287 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:47:16.287 12:50:22 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:47:16.287 12:50:22 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:47:16.287 12:50:22 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:47:16.287 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:47:21.559 INFO: APP EXITING 
00:47:21.559 INFO: killing all VMs 00:47:21.559 INFO: killing vhost app 00:47:21.559 INFO: EXIT DONE 00:47:22.937 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:47:22.937 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:47:23.196 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:47:26.486 Cleaning 00:47:26.486 Removing: /var/run/dpdk/spdk0/config 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:26.486 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:26.486 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:26.486 Removing: /var/run/dpdk/spdk1/config 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:26.486 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:26.486 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:26.486 Removing: /var/run/dpdk/spdk2/config 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:26.486 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:26.486 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:26.486 Removing: /var/run/dpdk/spdk3/config 00:47:26.486 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:26.486 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:26.486 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:26.486 Removing: /var/run/dpdk/spdk4/config 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:26.486 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:26.486 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:26.486 Removing: /dev/shm/bdev_svc_trace.1 00:47:26.486 Removing: /dev/shm/nvmf_trace.0 00:47:26.486 Removing: /dev/shm/spdk_tgt_trace.pid3440865 00:47:26.486 Removing: /var/run/dpdk/spdk0 00:47:26.486 Removing: /var/run/dpdk/spdk1 00:47:26.486 Removing: /var/run/dpdk/spdk2 00:47:26.487 Removing: /var/run/dpdk/spdk3 00:47:26.487 Removing: /var/run/dpdk/spdk4 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3437058 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3438550 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3440865 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3441825 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3443180 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3443661 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3445054 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3445283 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3445874 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3447774 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3449206 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3449989 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3450728 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3451476 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3452204 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3452465 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3452706 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3452995 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3453950 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3457315 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3458021 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3458849 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3459073 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3461126 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3461292 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3463111 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3463335 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3463832 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3464058 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3464720 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3464816 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3466429 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3466676 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3466984 00:47:26.487 Removing: 
/var/run/dpdk/spdk_pid3471278 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3475738 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3485991 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3486664 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3491080 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3491541 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3496181 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3502173 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3505660 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3516713 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3525773 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3527684 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3528794 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3546350 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3550574 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3636042 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3641447 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3647414 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3657364 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3686430 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3691080 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3692847 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3694785 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3695105 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3695553 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3695796 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3696754 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3698749 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3700377 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3701104 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3703713 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3705030 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3706012 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3710392 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3716120 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3716121 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3716122 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3720050 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3723964 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3728852 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3765655 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3770096 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3776251 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3778229 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3780432 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3782995 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3787932 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3792966 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3797306 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3805175 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3805182 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3810028 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3810253 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3810480 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3810928 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3810935 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3812503 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3814063 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3815714 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3817382 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3818950 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3820550 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3827186 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3827739 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3829643 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3830658 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3836701 00:47:26.487 Removing: 
/var/run/dpdk/spdk_pid3839495 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3845129 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3850585 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3859618 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3866939 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3867025 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3885854 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3886561 00:47:26.487 Removing: /var/run/dpdk/spdk_pid3887445 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3888157 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3889524 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3890206 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3890895 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3891664 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3896180 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3896520 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3902794 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3903001 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3908476 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3912979 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3922949 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3923613 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3927795 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3928258 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3932664 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3938614 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3941351 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3951971 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3961368 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3963240 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3964144 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3980855 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3985041 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3987925 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3995913 00:47:26.746 Removing: /var/run/dpdk/spdk_pid3995924 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4001052 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4003504 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4005637 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4006878 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4009006 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4010411 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4019265 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4019712 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4020197 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4022795 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4023333 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4023888 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4027509 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4027669 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4029371 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4030350 00:47:26.746 Removing: /var/run/dpdk/spdk_pid4030437 00:47:26.746 Clean 00:47:26.746 12:50:33 -- common/autotest_common.sh@1453 -- # return 0 00:47:26.746 12:50:33 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:47:26.746 12:50:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:26.746 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:47:26.746 12:50:33 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:47:26.746 12:50:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:26.746 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:47:27.005 12:50:33 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:27.005 12:50:33 -- spdk/autotest.sh@394 -- # [[ -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:27.005 12:50:33 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:27.005 12:50:33 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:47:27.006 12:50:33 -- spdk/autotest.sh@398 -- # hostname 00:47:27.006 12:50:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:27.006 geninfo: WARNING: invalid characters removed from testname! 00:47:48.941 12:50:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:49.200 12:50:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:51.105 12:50:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:53.010 12:50:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:54.915 12:51:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:56.292 12:51:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:58.198 12:51:04 -- 
spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:58.198 12:51:04 -- spdk/autorun.sh@1 -- $ timing_finish 00:47:58.198 12:51:04 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:47:58.198 12:51:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:58.198 12:51:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:58.198 12:51:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:58.198 + [[ -n 3361210 ]] 00:47:58.198 + sudo kill 3361210 00:47:58.208 [Pipeline] } 00:47:58.223 [Pipeline] // stage 00:47:58.229 [Pipeline] } 00:47:58.242 [Pipeline] // timeout 00:47:58.247 [Pipeline] } 00:47:58.260 [Pipeline] // catchError 00:47:58.265 [Pipeline] } 00:47:58.280 [Pipeline] // wrap 00:47:58.286 [Pipeline] } 00:47:58.298 [Pipeline] // catchError 00:47:58.307 [Pipeline] stage 00:47:58.309 [Pipeline] { (Epilogue) 00:47:58.321 [Pipeline] catchError 00:47:58.322 [Pipeline] { 00:47:58.334 [Pipeline] echo 00:47:58.336 Cleanup processes 00:47:58.342 [Pipeline] sh 00:47:58.627 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:58.627 4042954 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:58.640 [Pipeline] sh 00:47:58.924 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:58.924 ++ grep -v 'sudo pgrep' 00:47:58.924 ++ awk '{print $1}' 00:47:58.924 + sudo kill -9 00:47:58.924 + true 00:47:58.936 [Pipeline] sh 00:47:59.218 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:11.442 [Pipeline] sh 00:48:11.727 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:11.727 Artifacts sizes are good 00:48:11.742 [Pipeline] archiveArtifacts 00:48:11.749 Archiving artifacts 00:48:11.889 [Pipeline] sh 00:48:12.177 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:48:12.192 [Pipeline] cleanWs 00:48:12.202 [WS-CLEANUP] Deleting project workspace... 00:48:12.202 [WS-CLEANUP] Deferred wipeout is used... 00:48:12.209 [WS-CLEANUP] done 00:48:12.211 [Pipeline] } 00:48:12.228 [Pipeline] // catchError 00:48:12.239 [Pipeline] sh 00:48:12.524 + logger -p user.info -t JENKINS-CI 00:48:12.533 [Pipeline] } 00:48:12.545 [Pipeline] // stage 00:48:12.550 [Pipeline] } 00:48:12.564 [Pipeline] // node 00:48:12.569 [Pipeline] End of Pipeline 00:48:12.608 Finished: SUCCESS
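The coverage post-processing traced between 12:50:53 and 12:51:04 is the standard lcov capture/merge/filter flow. Stripped of the long --rc option list and the full workspace paths that every real invocation carried, the sequence is equivalent to the following sketch ($SPDK_DIR stands in for the traced /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path):

    # Condensed equivalent of the traced lcov steps (autotest.sh@398-@407).
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info   # capture test counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info                 # merge with the pre-test baseline
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                      # drop bundled DPDK
    lcov -q -r cov_total.info '/usr/*' --ignore-errors unused,unused -o cov_total.info   # drop system headers
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info

The epilogue also repeats the prologue's orphan-reaper: pgrep lists anything still running out of the workspace spdk tree, the pgrep line itself is filtered out, and any surviving PIDs are killed — none were found here, hence the bare "sudo kill -9" followed by "+ true". A pipeline-equivalent one-liner, assuming the Jenkins-provided $WORKSPACE variable; xargs -r avoids the empty kill that the verbatim trace absorbs with "|| true":

    # Equivalent of the traced reaper (prologue and epilogue use the same steps).
    sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' | xargs -r sudo kill -9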